Test Report: Docker_Linux_crio 21647

f5f0858587e77e8c1559a01ec4b2a40a06b76dc9:2025-10-18:41961

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.23
35 TestAddons/parallel/Registry 13.97
36 TestAddons/parallel/RegistryCreds 0.44
37 TestAddons/parallel/Ingress 144.77
38 TestAddons/parallel/InspektorGadget 6.23
39 TestAddons/parallel/MetricsServer 5.3
41 TestAddons/parallel/CSI 38.47
42 TestAddons/parallel/Headlamp 2.52
43 TestAddons/parallel/CloudSpanner 5.24
44 TestAddons/parallel/LocalPath 8.1
45 TestAddons/parallel/NvidiaDevicePlugin 6.28
46 TestAddons/parallel/Yakd 5.23
47 TestAddons/parallel/AmdGpuDevicePlugin 5.23
98 TestFunctional/parallel/ServiceCmdConnect 602.8
123 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.93
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 1.59
197 TestJSONOutput/unpause/Command 2.13
282 TestPause/serial/Pause 5.34
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.15
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.36
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.52
360 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.72
371 TestStartStop/group/no-preload/serial/Pause 7.41
373 TestStartStop/group/old-k8s-version/serial/Pause 7.09
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.68
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.12
386 TestStartStop/group/embed-certs/serial/Pause 5.34
392 TestStartStop/group/newest-cni/serial/Pause 5.76
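
Each failure below can be re-run in isolation from a minikube checkout. A minimal sketch, assuming out/minikube-linux-amd64 has already been built and a standard Go toolchain is available (the timeout value is illustrative):

	go test ./test/integration -run 'TestAddons/serial/Volcano' -v -timeout 30m

The -run pattern matches subtests by slash-separated path, so any name from the table above can be substituted.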
TestAddons/serial/Volcano (0.23s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable volcano --alsologtostderr -v=1: exit status 11 (233.869445ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:31:20.734300   18782 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:20.734624   18782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:20.734634   18782 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:20.734639   18782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:20.734850   18782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:20.735124   18782 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:20.735467   18782 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:20.735486   18782 addons.go:606] checking whether the cluster is paused
	I1018 11:31:20.735563   18782 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:20.735575   18782 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:20.735943   18782 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:20.754691   18782 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:20.754743   18782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:20.772208   18782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:20.866140   18782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:20.866211   18782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:20.895377   18782 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:20.895405   18782 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:20.895411   18782 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:20.895416   18782 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:20.895420   18782 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:20.895425   18782 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:20.895429   18782 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:20.895433   18782 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:20.895437   18782 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:20.895444   18782 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:20.895448   18782 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:20.895452   18782 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:20.895456   18782 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:20.895460   18782 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:20.895464   18782 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:20.895484   18782 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:20.895489   18782 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:20.895493   18782 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:20.895495   18782 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:20.895498   18782 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:20.895500   18782 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:20.895502   18782 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:20.895504   18782 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:20.895507   18782 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:20.895509   18782 cri.go:89] found id: ""
	I1018 11:31:20.895547   18782 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:20.909606   18782 out.go:203] 
	W1018 11:31:20.910904   18782 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:20.910927   18782 out.go:285] * 
	* 
	W1018 11:31:20.913966   18782 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:20.915425   18782 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.23s)
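
Note: this exit, and the identical MK_ADDON_DISABLE_PAUSED exits across the addon tests below, come from the paused-state check that "addons disable" performs first. Listing kube-system containers through crictl succeeds, but the follow-up "sudo runc list -f json" fails because /run/runc does not exist on this crio node (plausibly the bundled runtime keeps its state elsewhere, e.g. under /run/crun when crun is the OCI runtime). A minimal reproduction sketch using the exact commands from the stderr log, assuming the addons-162665 cluster from this run is still up:

	out/minikube-linux-amd64 -p addons-162665 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # succeeds, prints container IDs
	out/minikube-linux-amd64 -p addons-162665 ssh "sudo runc list -f json"   # fails: open /run/runc: no such file or directory

The non-zero exit of the second command is what surfaces as exit status 11 from "minikube addons disable".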

TestAddons/parallel/Registry (13.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.261081ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-8ns6k" [c800a208-4e00-4ea5-bacc-ab4677684b88] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003647458s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-tsk7w" [34d517d6-de7d-42f2-88d2-ae400f0fce9b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003102454s
addons_test.go:392: (dbg) Run:  kubectl --context addons-162665 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-162665 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-162665 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.535852295s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 ip
2025/10/18 11:31:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable registry --alsologtostderr -v=1: exit status 11 (225.660588ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:31:43.587494   21535 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:43.587806   21535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:43.587819   21535 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:43.587825   21535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:43.588026   21535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:43.588259   21535 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:43.588578   21535 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:43.588597   21535 addons.go:606] checking whether the cluster is paused
	I1018 11:31:43.588674   21535 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:43.588686   21535 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:43.589090   21535 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:43.607255   21535 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:43.607305   21535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:43.624386   21535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:43.718633   21535 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:43.718706   21535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:43.747382   21535 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:31:43.747423   21535 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:43.747428   21535 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:43.747433   21535 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:43.747436   21535 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:43.747441   21535 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:43.747446   21535 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:43.747450   21535 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:43.747453   21535 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:43.747466   21535 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:43.747474   21535 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:43.747478   21535 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:43.747483   21535 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:43.747488   21535 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:43.747494   21535 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:43.747505   21535 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:43.747512   21535 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:43.747518   21535 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:43.747521   21535 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:43.747524   21535 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:43.747532   21535 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:43.747537   21535 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:43.747542   21535 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:43.747549   21535 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:43.747553   21535 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:43.747558   21535 cri.go:89] found id: ""
	I1018 11:31:43.747610   21535 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:43.761813   21535 out.go:203] 
	W1018 11:31:43.763360   21535 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:43.763384   21535 out.go:285] * 
	* 
	W1018 11:31:43.766341   21535 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:43.767890   21535 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.97s)

TestAddons/parallel/RegistryCreds (0.44s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.776051ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-162665
addons_test.go:332: (dbg) Run:  kubectl --context addons-162665 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (258.990542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:31:38.800388   20748 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:38.800522   20748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:38.800531   20748 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:38.800535   20748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:38.800750   20748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:38.801067   20748 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:38.801399   20748 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:38.801418   20748 addons.go:606] checking whether the cluster is paused
	I1018 11:31:38.801497   20748 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:38.801510   20748 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:38.801924   20748 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:38.821695   20748 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:38.821795   20748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:38.840708   20748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:38.941333   20748 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:38.941432   20748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:38.973953   20748 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:38.973974   20748 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:38.973977   20748 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:38.973981   20748 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:38.973984   20748 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:38.973987   20748 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:38.973989   20748 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:38.973992   20748 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:38.973996   20748 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:38.974005   20748 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:38.974010   20748 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:38.974014   20748 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:38.974018   20748 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:38.974023   20748 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:38.974027   20748 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:38.974042   20748 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:38.974052   20748 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:38.974058   20748 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:38.974062   20748 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:38.974065   20748 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:38.974071   20748 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:38.974076   20748 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:38.974079   20748 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:38.974081   20748 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:38.974083   20748 cri.go:89] found id: ""
	I1018 11:31:38.974129   20748 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:38.990345   20748 out.go:203] 
	W1018 11:31:38.991714   20748 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:38.991739   20748 out.go:285] * 
	* 
	W1018 11:31:38.995111   20748 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:38.996832   20748 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)

TestAddons/parallel/Ingress (144.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-162665 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-162665 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-162665 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ed244ac9-8791-40d4-b3eb-f206ab18b888] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ed244ac9-8791-40d4-b3eb-f206ab18b888] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003969666s
I1018 11:31:42.559648    9360 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.286954944s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
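For context: ssh propagates the remote command's exit code, and curl exit status 28 means the operation timed out, so the request hung for the full 2m14s rather than being refused outright. A re-check sketch with explicit timeouts and verbose output (the -v, --connect-timeout, and --max-time flags are illustrative additions to the original command):

	out/minikube-linux-amd64 -p addons-162665 ssh "curl -sv --connect-timeout 5 --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

A quick connection failure here would point at basic connectivity; another hang points at the ingress controller not answering on the node's port 80.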
addons_test.go:288: (dbg) Run:  kubectl --context addons-162665 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-162665
helpers_test.go:243: (dbg) docker inspect addons-162665:

-- stdout --
	[
	    {
	        "Id": "7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f",
	        "Created": "2025-10-18T11:29:33.405172816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11346,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T11:29:33.455561245Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/hostname",
	        "HostsPath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/hosts",
	        "LogPath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f-json.log",
	        "Name": "/addons-162665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-162665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-162665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f",
	                "LowerDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-162665",
	                "Source": "/var/lib/docker/volumes/addons-162665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-162665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-162665",
	                "name.minikube.sigs.k8s.io": "addons-162665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aac1b2db2b31b8e260b2c1c78bffc1a3353fd7e78c0c611ff8d59c7ad8bd9c15",
	            "SandboxKey": "/var/run/docker/netns/aac1b2db2b31",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-162665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:43:ed:8d:ee:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "601a2ca07e5ff6602239981e74e84e169b74a70321fbdeed94c00633a93b6311",
	                    "EndpointID": "bfc92469d12465a28a8a2951ec0a54cb92c2a831e9cb335da869c131e445089d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-162665",
	                        "7255d06b4d19"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-162665 -n addons-162665
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-162665 logs -n 25: (1.150686724s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-525445 --alsologtostderr --binary-mirror http://127.0.0.1:46875 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-525445 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ delete  │ -p binary-mirror-525445                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-525445 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ addons  │ enable dashboard -p addons-162665                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-162665                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	│ start   │ -p addons-162665 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:31 UTC │
	│ addons  │ addons-162665 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-162665 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-162665                                                                                                                                                                                                                                                                                                                                                                                           │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │ 18 Oct 25 11:31 UTC │
	│ addons  │ addons-162665 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ ssh     │ addons-162665 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ ip      │ addons-162665 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │ 18 Oct 25 11:31 UTC │
	│ addons  │ addons-162665 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ ssh     │ addons-162665 ssh cat /opt/local-path-provisioner/pvc-6d9219d2-3cde-4934-b9fc-1247e93a5f71_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │ 18 Oct 25 11:31 UTC │
	│ addons  │ addons-162665 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │                     │
	│ addons  │ addons-162665 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:32 UTC │                     │
	│ addons  │ addons-162665 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:32 UTC │                     │
	│ ip      │ addons-162665 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-162665        │ jenkins │ v1.37.0 │ 18 Oct 25 11:33 UTC │ 18 Oct 25 11:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:08.517995   10685 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:08.518227   10685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:08.518235   10685 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:08.518239   10685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:08.518432   10685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:29:08.518968   10685 out.go:368] Setting JSON to false
	I1018 11:29:08.519711   10685 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":697,"bootTime":1760786252,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:08.519806   10685 start.go:141] virtualization: kvm guest
	I1018 11:29:08.521741   10685 out.go:179] * [addons-162665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:29:08.522917   10685 notify.go:220] Checking for updates...
	I1018 11:29:08.522957   10685 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:29:08.524594   10685 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:08.526057   10685 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:29:08.527386   10685 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 11:29:08.528849   10685 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:29:08.530007   10685 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:29:08.531314   10685 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:29:08.553016   10685 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 11:29:08.553102   10685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:29:08.610185   10685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 11:29:08.599830107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:29:08.610287   10685 docker.go:318] overlay module found
	I1018 11:29:08.611978   10685 out.go:179] * Using the docker driver based on user configuration
	I1018 11:29:08.613157   10685 start.go:305] selected driver: docker
	I1018 11:29:08.613173   10685 start.go:925] validating driver "docker" against <nil>
	I1018 11:29:08.613191   10685 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:29:08.613708   10685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:29:08.672299   10685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 11:29:08.663075027 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:29:08.672494   10685 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:29:08.672695   10685 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:29:08.674459   10685 out.go:179] * Using Docker driver with root privileges
	I1018 11:29:08.675635   10685 cni.go:84] Creating CNI manager for ""
	I1018 11:29:08.675697   10685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 11:29:08.675707   10685 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 11:29:08.675792   10685 start.go:349] cluster config:
	{Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:29:08.677283   10685 out.go:179] * Starting "addons-162665" primary control-plane node in "addons-162665" cluster
	I1018 11:29:08.678603   10685 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 11:29:08.679856   10685 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 11:29:08.681031   10685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:08.681075   10685 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 11:29:08.681087   10685 cache.go:58] Caching tarball of preloaded images
	I1018 11:29:08.681139   10685 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 11:29:08.681182   10685 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 11:29:08.681194   10685 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 11:29:08.681549   10685 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/config.json ...
	I1018 11:29:08.681574   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/config.json: {Name:mke74a72cf962e4e13d5f241fc60a68ff68e6d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:08.697060   10685 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:29:08.697177   10685 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 11:29:08.697193   10685 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 11:29:08.697197   10685 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 11:29:08.697210   10685 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 11:29:08.697219   10685 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 11:29:21.168308   10685 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 11:29:21.168345   10685 cache.go:232] Successfully downloaded all kic artifacts
	I1018 11:29:21.168420   10685 start.go:360] acquireMachinesLock for addons-162665: {Name:mk4d42d0ef42e24680ba09e77813105e1317a459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:21.168537   10685 start.go:364] duration metric: took 87.239µs to acquireMachinesLock for "addons-162665"
	I1018 11:29:21.168568   10685 start.go:93] Provisioning new machine with config: &{Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 11:29:21.168643   10685 start.go:125] createHost starting for "" (driver="docker")
	I1018 11:29:21.170597   10685 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 11:29:21.170844   10685 start.go:159] libmachine.API.Create for "addons-162665" (driver="docker")
	I1018 11:29:21.170878   10685 client.go:168] LocalClient.Create starting
	I1018 11:29:21.171019   10685 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 11:29:21.927676   10685 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 11:29:22.136320   10685 cli_runner.go:164] Run: docker network inspect addons-162665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 11:29:22.152986   10685 cli_runner.go:211] docker network inspect addons-162665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 11:29:22.153079   10685 network_create.go:284] running [docker network inspect addons-162665] to gather additional debugging logs...
	I1018 11:29:22.153096   10685 cli_runner.go:164] Run: docker network inspect addons-162665
	W1018 11:29:22.169093   10685 cli_runner.go:211] docker network inspect addons-162665 returned with exit code 1
	I1018 11:29:22.169121   10685 network_create.go:287] error running [docker network inspect addons-162665]: docker network inspect addons-162665: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-162665 not found
	I1018 11:29:22.169137   10685 network_create.go:289] output of [docker network inspect addons-162665]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-162665 not found
	
	** /stderr **
	I1018 11:29:22.169249   10685 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 11:29:22.186125   10685 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce89a0}
	I1018 11:29:22.186159   10685 network_create.go:124] attempt to create docker network addons-162665 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 11:29:22.186198   10685 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-162665 addons-162665
	I1018 11:29:22.244143   10685 network_create.go:108] docker network addons-162665 192.168.49.0/24 created
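	The two failed inspects above are how minikube probes for an existing cluster network before creating one. A minimal stand-alone sketch of the same inspect-then-create pattern, reusing the name, subnet, and driver options from the docker network create line above (nothing here beyond what this run used):
	
		# Create the cluster network only if it does not already exist.
		docker network inspect addons-162665 >/dev/null 2>&1 || \
		  docker network create --driver=bridge \
		    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
		    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
		    addons-162665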
	I1018 11:29:22.244181   10685 kic.go:121] calculated static IP "192.168.49.2" for the "addons-162665" container
	I1018 11:29:22.244242   10685 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 11:29:22.260239   10685 cli_runner.go:164] Run: docker volume create addons-162665 --label name.minikube.sigs.k8s.io=addons-162665 --label created_by.minikube.sigs.k8s.io=true
	I1018 11:29:22.277274   10685 oci.go:103] Successfully created a docker volume addons-162665
	I1018 11:29:22.277357   10685 cli_runner.go:164] Run: docker run --rm --name addons-162665-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162665 --entrypoint /usr/bin/test -v addons-162665:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 11:29:28.949328   10685 cli_runner.go:217] Completed: docker run --rm --name addons-162665-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162665 --entrypoint /usr/bin/test -v addons-162665:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.671922759s)
	I1018 11:29:28.949355   10685 oci.go:107] Successfully prepared a docker volume addons-162665
	I1018 11:29:28.949368   10685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:28.949386   10685 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 11:29:28.949434   10685 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 11:29:33.334221   10685 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.384734791s)
	I1018 11:29:33.334253   10685 kic.go:203] duration metric: took 4.384864975s to extract preloaded images to volume ...
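	The two one-shot containers above are minikube's preload pattern: the first mounts the named volume and merely tests that /var/lib is reachable; the second mounts both the host tarball and the volume and untars the images into it. A sketch of the extraction step on its own, with the tarball path shortened to the default $HOME/.minikube layout (an assumption; this run used a Jenkins workspace path) and the kicbase image pulled earlier:
	
		# Unpack a preload tarball into a docker volume via a throwaway container.
		docker run --rm --entrypoint /usr/bin/tar \
		  -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
		  -v addons-162665:/extractDir \
		  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 \
		  -I lz4 -xf /preloaded.tar -C /extractDir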
	W1018 11:29:33.334334   10685 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 11:29:33.334367   10685 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 11:29:33.334401   10685 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 11:29:33.389351   10685 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-162665 --name addons-162665 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162665 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-162665 --network addons-162665 --ip 192.168.49.2 --volume addons-162665:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 11:29:33.712260   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Running}}
	I1018 11:29:33.731670   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:33.750742   10685 cli_runner.go:164] Run: docker exec addons-162665 stat /var/lib/dpkg/alternatives/iptables
	I1018 11:29:33.801386   10685 oci.go:144] the created container "addons-162665" has a running status.
	I1018 11:29:33.801414   10685 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa...
	I1018 11:29:33.962487   10685 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 11:29:33.990901   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:34.009083   10685 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 11:29:34.009103   10685 kic_runner.go:114] Args: [docker exec --privileged addons-162665 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 11:29:34.061624   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:34.079458   10685 machine.go:93] provisionDockerMachine start ...
	I1018 11:29:34.079543   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:34.098903   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:34.099130   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:34.099145   10685 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 11:29:34.233667   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162665
	
	I1018 11:29:34.233693   10685 ubuntu.go:182] provisioning hostname "addons-162665"
	I1018 11:29:34.233740   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:34.252465   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:34.252711   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:34.252734   10685 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-162665 && echo "addons-162665" | sudo tee /etc/hostname
	I1018 11:29:34.394659   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162665
	
	I1018 11:29:34.394738   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:34.413360   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:34.413597   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:34.413625   10685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-162665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-162665/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-162665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 11:29:34.545426   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 11:29:34.545457   10685 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 11:29:34.545507   10685 ubuntu.go:190] setting up certificates
	I1018 11:29:34.545520   10685 provision.go:84] configureAuth start
	I1018 11:29:34.545578   10685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162665
	I1018 11:29:34.562843   10685 provision.go:143] copyHostCerts
	I1018 11:29:34.562909   10685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 11:29:34.563027   10685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 11:29:34.563110   10685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 11:29:34.563168   10685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.addons-162665 san=[127.0.0.1 192.168.49.2 addons-162665 localhost minikube]
	I1018 11:29:35.074978   10685 provision.go:177] copyRemoteCerts
	I1018 11:29:35.075034   10685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 11:29:35.075068   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.092177   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.187939   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 11:29:35.206548   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 11:29:35.223997   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 11:29:35.240789   10685 provision.go:87] duration metric: took 695.256127ms to configureAuth
	I1018 11:29:35.240812   10685 ubuntu.go:206] setting minikube options for container-runtime
	I1018 11:29:35.240989   10685 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:29:35.241123   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.258234   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:35.258474   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:35.258493   10685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 11:29:35.495425   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 11:29:35.495446   10685 machine.go:96] duration metric: took 1.415968808s to provisionDockerMachine
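	The sysconfig drop-in written just above marks the service CIDR (10.96.0.0/12) as an insecure registry, so cluster-internal registries can be pulled from without TLS. While the profile is up, the file can be read back over SSH; a minimal sketch:
	
		minikube -p addons-162665 ssh -- cat /etc/sysconfig/crio.minikube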
	I1018 11:29:35.495455   10685 client.go:171] duration metric: took 14.324567518s to LocalClient.Create
	I1018 11:29:35.495491   10685 start.go:167] duration metric: took 14.324640696s to libmachine.API.Create "addons-162665"
	I1018 11:29:35.495501   10685 start.go:293] postStartSetup for "addons-162665" (driver="docker")
	I1018 11:29:35.495511   10685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 11:29:35.495559   10685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 11:29:35.495588   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.513721   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.610862   10685 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 11:29:35.614281   10685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 11:29:35.614315   10685 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 11:29:35.614329   10685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 11:29:35.614384   10685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 11:29:35.614408   10685 start.go:296] duration metric: took 118.902307ms for postStartSetup
	I1018 11:29:35.614661   10685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162665
	I1018 11:29:35.631926   10685 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/config.json ...
	I1018 11:29:35.632186   10685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:29:35.632243   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.650023   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.742102   10685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 11:29:35.746581   10685 start.go:128] duration metric: took 14.577923254s to createHost
	I1018 11:29:35.746608   10685 start.go:83] releasing machines lock for "addons-162665", held for 14.578054374s
	I1018 11:29:35.746671   10685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162665
	I1018 11:29:35.764232   10685 ssh_runner.go:195] Run: cat /version.json
	I1018 11:29:35.764276   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.764331   10685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 11:29:35.764387   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.783024   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.783262   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.928218   10685 ssh_runner.go:195] Run: systemctl --version
	I1018 11:29:35.934505   10685 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 11:29:35.968809   10685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 11:29:35.973630   10685 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 11:29:35.973688   10685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 11:29:36.000070   10685 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 11:29:36.000092   10685 start.go:495] detecting cgroup driver to use...
	I1018 11:29:36.000132   10685 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 11:29:36.000181   10685 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 11:29:36.015711   10685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 11:29:36.027721   10685 docker.go:218] disabling cri-docker service (if available) ...
	I1018 11:29:36.027787   10685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 11:29:36.044264   10685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 11:29:36.061680   10685 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 11:29:36.138588   10685 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 11:29:36.221342   10685 docker.go:234] disabling docker service ...
	I1018 11:29:36.221395   10685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 11:29:36.239646   10685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 11:29:36.252480   10685 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 11:29:36.334445   10685 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 11:29:36.410171   10685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 11:29:36.422565   10685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 11:29:36.436330   10685 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 11:29:36.436390   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.446211   10685 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 11:29:36.446267   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.454852   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.463674   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.472372   10685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 11:29:36.480603   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.488955   10685 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.502656   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.511224   10685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 11:29:36.518258   10685 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 11:29:36.518338   10685 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 11:29:36.529862   10685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
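	The sed edits above align CRI-O with what the kubelet will expect: pause image registry.k8s.io/pause:3.10.1, the systemd cgroup manager, conmon in the pod cgroup, and unprivileged ports starting at 0. A quick spot-check of the resulting drop-in, as a sketch against the running node:
	
		minikube -p addons-162665 ssh -- \
		  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf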
	I1018 11:29:36.537169   10685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:29:36.610401   10685 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 11:29:36.711906   10685 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 11:29:36.711969   10685 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 11:29:36.715836   10685 start.go:563] Will wait 60s for crictl version
	I1018 11:29:36.715904   10685 ssh_runner.go:195] Run: which crictl
	I1018 11:29:36.719436   10685 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 11:29:36.742964   10685 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 11:29:36.743121   10685 ssh_runner.go:195] Run: crio --version
	I1018 11:29:36.770082   10685 ssh_runner.go:195] Run: crio --version
	I1018 11:29:36.798787   10685 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 11:29:36.800289   10685 cli_runner.go:164] Run: docker network inspect addons-162665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 11:29:36.816909   10685 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 11:29:36.820931   10685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 11:29:36.831122   10685 kubeadm.go:883] updating cluster {Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 11:29:36.831301   10685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:36.831372   10685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 11:29:36.862675   10685 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 11:29:36.862696   10685 crio.go:433] Images already preloaded, skipping extraction
	I1018 11:29:36.862737   10685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 11:29:36.887399   10685 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 11:29:36.887420   10685 cache_images.go:85] Images are preloaded, skipping loading
	I1018 11:29:36.887429   10685 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 11:29:36.887529   10685 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-162665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 11:29:36.887601   10685 ssh_runner.go:195] Run: crio config
	I1018 11:29:36.932490   10685 cni.go:84] Creating CNI manager for ""
	I1018 11:29:36.932512   10685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 11:29:36.932552   10685 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 11:29:36.932579   10685 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-162665 NodeName:addons-162665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 11:29:36.932704   10685 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-162665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 11:29:36.932781   10685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 11:29:36.940885   10685 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 11:29:36.940943   10685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 11:29:36.948584   10685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 11:29:36.961142   10685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 11:29:36.976455   10685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
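	With the config landed at /var/tmp/minikube/kubeadm.yaml.new, recent kubeadm releases can lint such a file before init; a sketch, assuming the v1.34.1 binaries found above are used on the node:
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new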
	I1018 11:29:36.989250   10685 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 11:29:36.993010   10685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 11:29:37.002984   10685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:29:37.083407   10685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 11:29:37.105193   10685 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665 for IP: 192.168.49.2
	I1018 11:29:37.105212   10685 certs.go:195] generating shared ca certs ...
	I1018 11:29:37.105226   10685 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.105385   10685 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 11:29:37.192357   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt ...
	I1018 11:29:37.192385   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt: {Name:mka3ecec2b2aab84aa27b1b0354e5b9efdba318a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.192558   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key ...
	I1018 11:29:37.192569   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key: {Name:mk95ba60734f15d990e406b8e853279868b97f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.192641   10685 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 11:29:37.231745   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt ...
	I1018 11:29:37.231781   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt: {Name:mkbdfa0d25f46dfa7ffa6b423e0f0cb725223088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.231942   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key ...
	I1018 11:29:37.231953   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key: {Name:mkfa8fca55a7201b9fd1abd7bc17b53c0ae00382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.232021   10685 certs.go:257] generating profile certs ...
	I1018 11:29:37.232069   10685 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.key
	I1018 11:29:37.232083   10685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt with IP's: []
	I1018 11:29:37.419385   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt ...
	I1018 11:29:37.419417   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: {Name:mkd8e6e07178e32a6c6afda800f9666e4077ecdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.419574   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.key ...
	I1018 11:29:37.419583   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.key: {Name:mkfe4a76a9bdecea041f4abb5ca5f33db085bcdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.419654   10685 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf
	I1018 11:29:37.419672   10685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 11:29:37.591106   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf ...
	I1018 11:29:37.591136   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf: {Name:mkf7ae67e94012cf306ecc751f58fae89e6c3c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.591325   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf ...
	I1018 11:29:37.591339   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf: {Name:mke791e6e1826669245c34107ea153fbe8e2b298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.592351   10685 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt
	I1018 11:29:37.592477   10685 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key
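	(Note: the apiserver certificate generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2] — 10.96.0.1 is the first address of the configured 10.96.0.0/12 serviceSubnet, i.e. the in-cluster kubernetes service VIP, alongside loopback, 10.0.0.1, and the node address. As a sketch, assuming an OpenSSL new enough for the -ext flag (1.1.1+), the SANs on the written file can be confirmed with:
	  # Sketch: print the SANs baked into the profile's apiserver cert.
	  openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt)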
	I1018 11:29:37.592535   10685 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key
	I1018 11:29:37.592554   10685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt with IP's: []
	I1018 11:29:37.694911   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt ...
	I1018 11:29:37.694948   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt: {Name:mk0e39fff8885e87a032b546fca4640d3503eea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.695162   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key ...
	I1018 11:29:37.695176   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key: {Name:mk998fa9a1a5812677e19484993b3cd5927a59a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.695361   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 11:29:37.695407   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 11:29:37.695438   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 11:29:37.695459   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 11:29:37.696009   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 11:29:37.714735   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 11:29:37.732261   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 11:29:37.750589   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 11:29:37.768577   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 11:29:37.786606   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 11:29:37.803713   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 11:29:37.821424   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 11:29:37.839529   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 11:29:37.858533   10685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 11:29:37.870599   10685 ssh_runner.go:195] Run: openssl version
	I1018 11:29:37.876524   10685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 11:29:37.887240   10685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:29:37.890744   10685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:29:37.890816   10685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:29:37.924617   10685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
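	(Note: `openssl x509 -hash -noout` above prints the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs, and b5213941.0 is exactly that hash plus a .0 suffix, so the symlink that follows makes minikubeCA discoverable by hash. A sketch that derives the link name instead of hard-coding it:
	  # Sketch: build the hash-named CA symlink OpenSSL expects.
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0")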
	I1018 11:29:37.933274   10685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 11:29:37.936877   10685 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 11:29:37.936918   10685 kubeadm.go:400] StartCluster: {Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:29:37.936979   10685 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:29:37.937050   10685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:29:37.962711   10685 cri.go:89] found id: ""
	I1018 11:29:37.962779   10685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 11:29:37.970775   10685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 11:29:37.978377   10685 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 11:29:37.978424   10685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 11:29:37.986012   10685 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 11:29:37.986026   10685 kubeadm.go:157] found existing configuration files:
	
	I1018 11:29:37.986062   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 11:29:37.993442   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 11:29:37.993501   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 11:29:38.000580   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 11:29:38.007873   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 11:29:38.007943   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 11:29:38.014962   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 11:29:38.022074   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 11:29:38.022120   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 11:29:38.029103   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 11:29:38.036185   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 11:29:38.036237   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 11:29:38.043098   10685 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 11:29:38.077372   10685 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 11:29:38.077447   10685 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 11:29:38.097088   10685 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 11:29:38.097151   10685 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 11:29:38.097230   10685 kubeadm.go:318] OS: Linux
	I1018 11:29:38.097337   10685 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 11:29:38.097401   10685 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 11:29:38.097478   10685 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 11:29:38.097543   10685 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 11:29:38.097637   10685 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 11:29:38.097720   10685 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 11:29:38.097801   10685 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 11:29:38.097853   10685 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 11:29:38.152363   10685 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 11:29:38.152533   10685 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 11:29:38.152658   10685 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 11:29:38.159185   10685 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 11:29:38.162876   10685 out.go:252]   - Generating certificates and keys ...
	I1018 11:29:38.162965   10685 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 11:29:38.163035   10685 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 11:29:38.420295   10685 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 11:29:38.743237   10685 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 11:29:38.987498   10685 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 11:29:39.582453   10685 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 11:29:39.873093   10685 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 11:29:39.873205   10685 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-162665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 11:29:40.131749   10685 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 11:29:40.131928   10685 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-162665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 11:29:40.189175   10685 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 11:29:40.935217   10685 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 11:29:41.131452   10685 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 11:29:41.131549   10685 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 11:29:41.544386   10685 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 11:29:41.717583   10685 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 11:29:41.953364   10685 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 11:29:42.106975   10685 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 11:29:42.618296   10685 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 11:29:42.618839   10685 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 11:29:42.623723   10685 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 11:29:42.625608   10685 out.go:252]   - Booting up control plane ...
	I1018 11:29:42.625711   10685 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 11:29:42.625787   10685 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 11:29:42.625841   10685 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 11:29:42.638348   10685 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 11:29:42.638463   10685 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 11:29:42.644577   10685 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 11:29:42.644799   10685 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 11:29:42.644869   10685 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 11:29:42.739861   10685 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 11:29:42.740020   10685 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 11:29:43.741491   10685 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001860835s
	I1018 11:29:43.744811   10685 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 11:29:43.744925   10685 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 11:29:43.745085   10685 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 11:29:43.745198   10685 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 11:29:44.918780   10685 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.173820007s
	I1018 11:29:45.714886   10685 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.969826217s
	I1018 11:29:47.246944   10685 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501985337s
	I1018 11:29:47.257236   10685 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 11:29:47.268493   10685 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 11:29:47.277317   10685 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 11:29:47.277665   10685 kubeadm.go:318] [mark-control-plane] Marking the node addons-162665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 11:29:47.285528   10685 kubeadm.go:318] [bootstrap-token] Using token: cvvifb.r3a9yrawhzc3ilo4
	I1018 11:29:47.286919   10685 out.go:252]   - Configuring RBAC rules ...
	I1018 11:29:47.287082   10685 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 11:29:47.290590   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 11:29:47.295358   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 11:29:47.297475   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 11:29:47.299623   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 11:29:47.301859   10685 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 11:29:47.653107   10685 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 11:29:48.068071   10685 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 11:29:48.653041   10685 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 11:29:48.654028   10685 kubeadm.go:318] 
	I1018 11:29:48.654115   10685 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 11:29:48.654129   10685 kubeadm.go:318] 
	I1018 11:29:48.654194   10685 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 11:29:48.654200   10685 kubeadm.go:318] 
	I1018 11:29:48.654220   10685 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 11:29:48.654282   10685 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 11:29:48.654333   10685 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 11:29:48.654342   10685 kubeadm.go:318] 
	I1018 11:29:48.654399   10685 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 11:29:48.654406   10685 kubeadm.go:318] 
	I1018 11:29:48.654443   10685 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 11:29:48.654449   10685 kubeadm.go:318] 
	I1018 11:29:48.654516   10685 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 11:29:48.654613   10685 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 11:29:48.654726   10685 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 11:29:48.654736   10685 kubeadm.go:318] 
	I1018 11:29:48.654895   10685 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 11:29:48.654994   10685 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 11:29:48.655005   10685 kubeadm.go:318] 
	I1018 11:29:48.655113   10685 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token cvvifb.r3a9yrawhzc3ilo4 \
	I1018 11:29:48.655247   10685 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 11:29:48.655290   10685 kubeadm.go:318] 	--control-plane 
	I1018 11:29:48.655298   10685 kubeadm.go:318] 
	I1018 11:29:48.655398   10685 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 11:29:48.655408   10685 kubeadm.go:318] 
	I1018 11:29:48.655522   10685 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token cvvifb.r3a9yrawhzc3ilo4 \
	I1018 11:29:48.655707   10685 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
	I1018 11:29:48.657603   10685 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 11:29:48.657738   10685 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
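	(Note: both preflight warnings are expected here — the host kernel exposes no "configs" module for kubeadm to parse, and minikube starts kubelet itself via the `systemctl start kubelet` call earlier rather than relying on an enabled unit. If one did want the unit enabled, the fix is the single command the warning quotes:
	  sudo systemctl enable kubelet.service)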
	I1018 11:29:48.657782   10685 cni.go:84] Creating CNI manager for ""
	I1018 11:29:48.657803   10685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 11:29:48.660549   10685 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 11:29:48.661863   10685 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 11:29:48.666044   10685 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 11:29:48.666061   10685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 11:29:48.678922   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 11:29:48.880742   10685 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 11:29:48.880852   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:48.880903   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-162665 minikube.k8s.io/updated_at=2025_10_18T11_29_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-162665 minikube.k8s.io/primary=true
	I1018 11:29:48.962022   10685 ops.go:34] apiserver oom_adj: -16
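	(Note: the -16 read from /proc/<pid>/oom_adj above tells the kernel OOM killer to strongly prefer killing other processes before kube-apiserver. oom_adj is the legacy interface; a sketch of reading the current-interface equivalent, not something this run executes:
	  # Sketch: read the modern oom_score_adj for the apiserver process.
	  cat /proc/$(pgrep kube-apiserver | head -1)/oom_score_adj)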
	I1018 11:29:48.962162   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:49.462404   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:49.962252   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:50.463063   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:50.962776   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:51.462840   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:51.962602   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:52.462454   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:52.962453   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:53.463032   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:53.962613   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:54.022742   10685 kubeadm.go:1113] duration metric: took 5.141937663s to wait for elevateKubeSystemPrivileges
	I1018 11:29:54.022788   10685 kubeadm.go:402] duration metric: took 16.085872206s to StartCluster
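	(Note: the burst of `kubectl get sa default` calls above is a poll at roughly 500ms intervals — each call fails until the controller-manager has created the "default" ServiceAccount, which is what the 5.14s elevateKubeSystemPrivileges metric measures and what the minikube-rbac clusterrolebinding created earlier depends on. Expressed as a plain shell loop — a sketch, not the Go implementation:
	  # Sketch of the readiness poll behind elevateKubeSystemPrivileges.
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done)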
	I1018 11:29:54.022809   10685 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:54.022921   10685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:29:54.023436   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:54.023653   10685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 11:29:54.023662   10685 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 11:29:54.023725   10685 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 11:29:54.023869   10685 addons.go:69] Setting yakd=true in profile "addons-162665"
	I1018 11:29:54.023876   10685 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:29:54.023887   10685 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-162665"
	I1018 11:29:54.023900   10685 addons.go:69] Setting metrics-server=true in profile "addons-162665"
	I1018 11:29:54.023878   10685 addons.go:69] Setting inspektor-gadget=true in profile "addons-162665"
	I1018 11:29:54.023916   10685 addons.go:69] Setting storage-provisioner=true in profile "addons-162665"
	I1018 11:29:54.023921   10685 addons.go:69] Setting ingress-dns=true in profile "addons-162665"
	I1018 11:29:54.023927   10685 addons.go:238] Setting addon inspektor-gadget=true in "addons-162665"
	I1018 11:29:54.023936   10685 addons.go:69] Setting default-storageclass=true in profile "addons-162665"
	I1018 11:29:54.023953   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023958   10685 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-162665"
	I1018 11:29:54.023960   10685 addons.go:238] Setting addon ingress-dns=true in "addons-162665"
	I1018 11:29:54.023972   10685 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-162665"
	I1018 11:29:54.023954   10685 addons.go:69] Setting ingress=true in profile "addons-162665"
	I1018 11:29:54.023983   10685 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-162665"
	I1018 11:29:54.024001   10685 addons.go:238] Setting addon ingress=true in "addons-162665"
	I1018 11:29:54.024016   10685 addons.go:69] Setting registry=true in profile "addons-162665"
	I1018 11:29:54.024051   10685 addons.go:69] Setting volumesnapshots=true in profile "addons-162665"
	I1018 11:29:54.024055   10685 addons.go:238] Setting addon registry=true in "addons-162665"
	I1018 11:29:54.024062   10685 addons.go:238] Setting addon volumesnapshots=true in "addons-162665"
	I1018 11:29:54.024070   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023949   10685 addons.go:238] Setting addon metrics-server=true in "addons-162665"
	I1018 11:29:54.024081   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024090   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024100   10685 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-162665"
	I1018 11:29:54.024130   10685 addons.go:69] Setting cloud-spanner=true in profile "addons-162665"
	I1018 11:29:54.024148   10685 addons.go:238] Setting addon cloud-spanner=true in "addons-162665"
	I1018 11:29:54.024151   10685 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-162665"
	I1018 11:29:54.024164   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024182   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023887   10685 addons.go:69] Setting registry-creds=true in profile "addons-162665"
	I1018 11:29:54.024239   10685 addons.go:238] Setting addon registry-creds=true in "addons-162665"
	I1018 11:29:54.024259   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024363   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024509   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024554   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024562   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024574   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024598   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024636   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024844   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.023906   10685 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-162665"
	I1018 11:29:54.025008   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023914   10685 addons.go:69] Setting gcp-auth=true in profile "addons-162665"
	I1018 11:29:54.025198   10685 mustload.go:65] Loading cluster: addons-162665
	I1018 11:29:54.023906   10685 addons.go:238] Setting addon yakd=true in "addons-162665"
	I1018 11:29:54.025380   10685 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:29:54.025384   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.025572   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.025622   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024033   10685 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-162665"
	I1018 11:29:54.025911   10685 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-162665"
	I1018 11:29:54.024043   10685 addons.go:69] Setting volcano=true in profile "addons-162665"
	I1018 11:29:54.025951   10685 addons.go:238] Setting addon volcano=true in "addons-162665"
	I1018 11:29:54.025978   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.026111   10685 out.go:179] * Verifying Kubernetes components...
	I1018 11:29:54.024071   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024001   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024024   10685 addons.go:238] Setting addon storage-provisioner=true in "addons-162665"
	I1018 11:29:54.026708   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024001   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.027722   10685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:29:54.032391   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.032453   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.035737   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.035748   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.036125   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.036820   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.038043   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.081482   10685 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 11:29:54.093872   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.094713   10685 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 11:29:54.094745   10685 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 11:29:54.095195   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.099169   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 11:29:54.100666   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 11:29:54.102497   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 11:29:54.102552   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 11:29:54.103989   10685 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 11:29:54.104048   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.105272   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:29:54.106485   10685 addons.go:238] Setting addon default-storageclass=true in "addons-162665"
	I1018 11:29:54.106538   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.107141   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.109079   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 11:29:54.113494   10685 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 11:29:54.115627   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 11:29:54.115688   10685 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 11:29:54.117136   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:29:54.119268   10685 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 11:29:54.119289   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 11:29:54.119348   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.119934   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 11:29:54.120039   10685 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 11:29:54.120268   10685 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 11:29:54.120289   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 11:29:54.120351   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.122011   10685 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 11:29:54.122087   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 11:29:54.122136   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.123264   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 11:29:54.126904   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 11:29:54.128333   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 11:29:54.130246   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 11:29:54.131994   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 11:29:54.132042   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 11:29:54.132130   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.138316   10685 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 11:29:54.138618   10685 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 11:29:54.140803   10685 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 11:29:54.141154   10685 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 11:29:54.141170   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 11:29:54.141224   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.141577   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 11:29:54.141592   10685 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 11:29:54.141639   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.142554   10685 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 11:29:54.142570   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 11:29:54.142613   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	W1018 11:29:54.146840   10685 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 11:29:54.147943   10685 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-162665"
	I1018 11:29:54.147987   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.148493   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.151299   10685 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 11:29:54.151368   10685 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 11:29:54.152611   10685 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 11:29:54.152636   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 11:29:54.152686   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.153055   10685 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 11:29:54.153066   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 11:29:54.153115   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.153798   10685 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 11:29:54.155447   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 11:29:54.155472   10685 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 11:29:54.155524   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.158224   10685 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 11:29:54.166352   10685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
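	(Note: the sed pipeline above rewrites CoreDNS's Corefile in place — it fetches the coredns ConfigMap, splices a hosts block in front of the `forward . /etc/resolv.conf` directive, enables the log plugin before `errors`, and replaces the ConfigMap. Reconstructed from the sed expressions themselves, not dumped from the cluster, the injected fragment is:
	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  })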
	I1018 11:29:54.166637   10685 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 11:29:54.166654   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 11:29:54.166707   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.182939   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.190029   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.190863   10685 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 11:29:54.190902   10685 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 11:29:54.190952   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.195285   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.195302   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.199413   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.208561   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.211055   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.212440   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.222695   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.227494   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.230471   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.233911   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.236439   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.244418   10685 out.go:179]   - Using image docker.io/busybox:stable
	I1018 11:29:54.249482   10685 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 11:29:54.250986   10685 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 11:29:54.251010   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 11:29:54.251067   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.255049   10685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 11:29:54.261892   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.289231   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
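	(Note: each "new ssh client" at 127.0.0.1:32768 above is built from the preceding `docker container inspect -f` calls, whose Go template reads the host port Docker published for the container's 22/tcp SSH port. Run by hand, the same lookup resolves the port directly:
	  # Same Go-template port lookup as the log lines above.
	  docker container inspect -f \
	    '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-162665)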
	I1018 11:29:54.358544   10685 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:54.358564   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 11:29:54.368569   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 11:29:54.377323   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:54.384544   10685 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 11:29:54.384566   10685 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 11:29:54.401674   10685 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 11:29:54.401706   10685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 11:29:54.407502   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 11:29:54.412861   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 11:29:54.415158   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 11:29:54.416517   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 11:29:54.423058   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 11:29:54.429842   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 11:29:54.429869   10685 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 11:29:54.431958   10685 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 11:29:54.431984   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 11:29:54.433070   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 11:29:54.433091   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 11:29:54.441833   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 11:29:54.444520   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 11:29:54.447371   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 11:29:54.447393   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 11:29:54.455397   10685 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 11:29:54.455425   10685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 11:29:54.455600   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 11:29:54.464528   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 11:29:54.464608   10685 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 11:29:54.474654   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 11:29:54.477976   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 11:29:54.478000   10685 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 11:29:54.479732   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 11:29:54.479776   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 11:29:54.505295   10685 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 11:29:54.505333   10685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 11:29:54.520556   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 11:29:54.520585   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 11:29:54.533287   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 11:29:54.533317   10685 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 11:29:54.547303   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 11:29:54.547336   10685 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 11:29:54.548478   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 11:29:54.548503   10685 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 11:29:54.555623   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 11:29:54.555647   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 11:29:54.583743   10685 node_ready.go:35] waiting up to 6m0s for node "addons-162665" to be "Ready" ...
	I1018 11:29:54.583885   10685 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
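The host record line above is minikube writing a "host.minikube.internal -> 192.168.49.1" mapping into CoreDNS's ConfigMap so pods can resolve the host machine. A minimal client-go sketch of that kind of edit follows; the Corefile splice is simplified and the package and function names (dnsutil, injectHostRecord, clientset cs) are illustrative, not minikube's actual code.

    package dnsutil

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // injectHostRecord adds a CoreDNS hosts stanza mapping name -> ip by
    // editing the coredns ConfigMap. Illustrative sketch only: a real
    // implementation must splice the stanza inside the ".:53 { ... }"
    // server block rather than appending it at the end.
    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, ip, name string) error {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        stanza := fmt.Sprintf("hosts {\n   %s %s\n   fallthrough\n}\n", ip, name)
        if !strings.Contains(cm.Data["Corefile"], name) {
            cm.Data["Corefile"] += "\n" + stanza
            _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        }
        return err
    }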
	I1018 11:29:54.593623   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 11:29:54.593648   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 11:29:54.594386   10685 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:29:54.594407   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 11:29:54.610478   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 11:29:54.645098   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 11:29:54.645129   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 11:29:54.665806   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:29:54.681705   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 11:29:54.734633   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 11:29:54.734660   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 11:29:54.800354   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 11:29:54.800386   10685 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 11:29:54.870857   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 11:29:54.870879   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 11:29:54.944961   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 11:29:54.945004   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 11:29:55.000006   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 11:29:55.000039   10685 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 11:29:55.039519   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 11:29:55.091796   10685 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-162665" context rescaled to 1 replicas
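The rescale above drops coredns from its default two replicas to one, which is enough for a single-node cluster. The Scale subresource is the standard way to do this programmatically; a sketch, with the clientset name cs and the function name assumed:

    package scaleutil

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleDeployment sets a deployment's replica count via the Scale
    // subresource, mirroring the "rescaled to 1 replicas" step in the log.
    func rescaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
        s, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        s.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, s, metav1.UpdateOptions{})
        return err
    }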
	W1018 11:29:55.318663   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:55.318714   10685 retry.go:31] will retry after 164.778691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
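The error itself is kubectl's client-side validation: one YAML document inside ig-crd.yaml is missing its apiVersion and kind fields, so that file is rejected on every attempt, while the objects from ig-deployment.yaml (the "created"/"unchanged" lines in stdout) go through. minikube's retry.go responds by retrying with growing, jittered delays (165ms, then 436ms, 508ms, and so on below). A generic sketch of that retry-with-backoff shape, not minikube's actual implementation:

    package retryutil

    import (
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn with exponential, jittered delays, the same
    // shape as the "will retry after ..." lines in the log. Parameters are
    // illustrative; base should be a non-trivial duration.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        delay := base
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return err // last error after exhausting attempts
    }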
	I1018 11:29:55.483869   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:55.621916   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.205360759s)
	I1018 11:29:55.621953   10685 addons.go:479] Verifying addon ingress=true in "addons-162665"
	I1018 11:29:55.621970   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.180116155s)
	I1018 11:29:55.622153   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177604939s)
	I1018 11:29:55.622216   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.166593387s)
	I1018 11:29:55.621911   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.198816793s)
	I1018 11:29:55.622276   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.147592807s)
	I1018 11:29:55.622291   10685 addons.go:479] Verifying addon registry=true in "addons-162665"
	I1018 11:29:55.622338   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.011826066s)
	I1018 11:29:55.622353   10685 addons.go:479] Verifying addon metrics-server=true in "addons-162665"
	I1018 11:29:55.623419   10685 out.go:179] * Verifying registry addon...
	I1018 11:29:55.623439   10685 out.go:179] * Verifying ingress addon...
	I1018 11:29:55.626594   10685 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 11:29:55.626597   10685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1018 11:29:55.629971   10685 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
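The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: two writers raced to update the local-path StorageClass, and the losing write carried a stale resourceVersion, so the API server rejected it with "the object has been modified". client-go's util/retry package exists for exactly this case; a sketch of marking the default class under conflict retry (the annotation key is the real upstream one, everything else is illustrative):

    package scutil

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefaultStorageClass re-reads and re-writes the StorageClass on
    // resourceVersion conflicts, avoiding the "object has been modified" error.
    func markDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }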
	I1018 11:29:55.630133   10685 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 11:29:55.630148   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:55.630749   10685 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 11:29:55.630783   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
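The kapi.go:96 lines that dominate the rest of this log are a poll loop: list pods matching the label selector and report the phase until it leaves Pending (the trailing [<nil>] is the absence of a container error). In client-go terms the loop looks roughly like this (package, function, and clientset names assumed):

    package kapiutil

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodRunning polls pods matching selector in ns until one reports
    // phase Running, or ctx expires. Sketch of the wait loop in the log.
    func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("timed out waiting for %q: %w", selector, ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
        }
    }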
	I1018 11:29:56.070581   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.404725186s)
	W1018 11:29:56.070639   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 11:29:56.070646   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.388820125s)
	I1018 11:29:56.070662   10685 retry.go:31] will retry after 254.577179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
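This failure is a different problem from the gadget one: the VolumeSnapshot CRDs and a VolumeSnapshotClass instance are applied in a single invocation, and the API server has not yet registered the new kind when the instance is validated, hence "ensure CRDs are installed first". The retry below succeeds once the CRDs reach their Established condition. A sketch of waiting for that condition with the apiextensions client (names assumed):

    package crdutil

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // waitForCRDEstablished polls a CRD until the API server marks it
    // Established, after which instances of its kind can be created.
    func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
        for {
            crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(250 * time.Millisecond):
            }
        }
    }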
	I1018 11:29:56.070939   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.031376956s)
	I1018 11:29:56.070964   10685 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-162665"
	I1018 11:29:56.072508   10685 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 11:29:56.072508   10685 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-162665 service yakd-dashboard -n yakd-dashboard
	
	I1018 11:29:56.074694   10685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 11:29:56.077777   10685 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 11:29:56.077794   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:56.179422   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:56.179610   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 11:29:56.199174   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:56.199207   10685 retry.go:31] will retry after 435.672465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:56.325871   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:29:56.577655   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:29:56.586902   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
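The node_ready.go warnings poll the node's Ready condition, which the kubelet only reports True once the container runtime and CNI are functional; the node stays NotReady for the whole stretch of log shown here. Reading that condition from code is a single lookup (sketch, names assumed):

    package nodeutil

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the node's NodeReady condition is True,
    // the same check behind the "Ready":"False" (will retry) lines above.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }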
	I1018 11:29:56.629337   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:56.629505   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:56.635479   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:57.077708   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:57.177788   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:57.177915   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:57.577944   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:57.629925   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:57.630114   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:58.078018   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:58.129512   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:58.129671   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:58.577916   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:29:58.587050   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:29:58.629393   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:58.629627   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:58.793715   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.467788899s)
	I1018 11:29:58.793739   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.158233194s)
	W1018 11:29:58.793777   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:58.793798   10685 retry.go:31] will retry after 507.850372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:59.077406   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:59.178016   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:59.178189   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:59.302400   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:59.578560   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:59.629880   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:59.629957   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:29:59.827300   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:59.827331   10685 retry.go:31] will retry after 552.636093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:00.078562   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:00.179571   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:00.179804   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:00.380193   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:00.578201   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:00.629214   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:00.629448   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:00.909299   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:00.909330   10685 retry.go:31] will retry after 1.024281319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:01.078311   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:01.086247   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:01.178695   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:01.178785   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:01.578548   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:01.629374   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:01.629494   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:01.709358   10685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 11:30:01.709435   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:30:01.728175   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:30:01.830372   10685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 11:30:01.843867   10685 addons.go:238] Setting addon gcp-auth=true in "addons-162665"
	I1018 11:30:01.843915   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:30:01.844336   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:30:01.862838   10685 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 11:30:01.862893   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:30:01.879818   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
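The gcp-auth step copies credentials onto the node over SSH. With the docker driver the node's port 22 is published on the host loopback (127.0.0.1:32768 above, discovered via docker container inspect), and the client authenticates with the profile's id_rsa key. A sketch of such a client using golang.org/x/crypto/ssh; host-key verification is skipped only because the target is a throwaway local node:

    package sshutil

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialNode opens an SSH connection to a docker-published node port, the
    // same shape as the "new ssh client" lines above; addr is like "127.0.0.1:32768".
    func dialNode(addr, user, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Local throwaway node only: never skip host-key checks
            // against a real host.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", addr, cfg)
    }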
	I1018 11:30:01.934052   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:02.078501   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:02.129906   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:02.130068   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:02.467999   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:02.468027   10685 retry.go:31] will retry after 1.246367926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:02.470364   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:30:02.471728   10685 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 11:30:02.472876   10685 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 11:30:02.472894   10685 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 11:30:02.486493   10685 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 11:30:02.486514   10685 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 11:30:02.499907   10685 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 11:30:02.499931   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 11:30:02.512905   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 11:30:02.577631   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:02.629833   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:02.630082   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:02.815867   10685 addons.go:479] Verifying addon gcp-auth=true in "addons-162665"
	I1018 11:30:02.817243   10685 out.go:179] * Verifying gcp-auth addon...
	I1018 11:30:02.819528   10685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 11:30:02.821713   10685 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 11:30:02.821730   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:03.077909   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:03.086974   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:03.178458   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:03.178597   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:03.322436   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:03.577324   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:03.632325   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:03.632547   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:03.714535   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:03.822858   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:04.079024   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:04.130228   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:04.130406   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:04.245658   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:04.245690   10685 retry.go:31] will retry after 2.529964576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:04.322019   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:04.577719   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:04.629212   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:04.629615   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:04.822886   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:05.077970   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:05.087113   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:05.129649   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:05.129712   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:05.322027   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:05.577941   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:05.629348   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:05.629499   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:05.822996   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:06.077645   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:06.130100   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:06.130138   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:06.322569   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:06.577605   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:06.630080   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:06.630121   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:06.776303   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:06.822096   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:07.078503   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:07.129967   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:07.130123   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:07.296343   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:07.296380   10685 retry.go:31] will retry after 4.158681311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:07.323060   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:07.577912   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:07.586141   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:07.630081   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:07.630328   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:07.822649   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:08.077379   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:08.130341   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:08.130387   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:08.323125   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:08.577915   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:08.629372   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:08.629611   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:08.822117   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:09.078122   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:09.129872   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:09.129944   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:09.322364   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:09.578241   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:09.586391   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:09.629850   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:09.629999   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:09.822613   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:10.077549   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:10.129190   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:10.129346   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:10.322997   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:10.577579   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:10.629650   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:10.629753   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:10.822246   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:11.078298   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:11.129994   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:11.130035   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:11.322981   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:11.456227   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:11.578256   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:11.586561   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:11.630737   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:11.631095   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:11.821756   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:11.986324   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:11.986354   10685 retry.go:31] will retry after 4.005862643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:12.077700   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:12.129592   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:12.129616   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:12.321991   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:12.577855   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:12.629446   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:12.629541   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:12.823022   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:13.078151   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:13.130153   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:13.130411   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:13.322685   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:13.577627   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:13.586818   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:13.629359   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:13.629489   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:13.821898   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:14.077249   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:14.129122   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:14.129213   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:14.322802   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:14.577360   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:14.629003   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:14.629238   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:14.822802   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:15.077694   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:15.129965   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:15.130009   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:15.322499   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:15.578072   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:15.587154   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:15.629517   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:15.629664   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:15.821837   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:15.992973   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:16.078179   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:16.129892   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:16.130074   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:16.322870   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:16.527260   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:16.527295   10685 retry.go:31] will retry after 8.183681212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:16.577988   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:16.629885   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:16.629968   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:16.822136   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:17.078184   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:17.129426   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:17.129596   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:17.322182   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:17.577724   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:17.629524   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:17.629739   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:17.821953   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:18.077740   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:18.086938   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:18.129610   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:18.129846   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:18.321937   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:18.578081   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:18.630092   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:18.630153   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:18.823073   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:19.078170   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:19.129926   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:19.130022   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:19.322444   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:19.578266   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:19.629677   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:19.629858   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:19.822188   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:20.077909   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:20.087077   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:20.129619   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:20.129733   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:20.322189   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:20.577976   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:20.629385   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:20.629569   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:20.821976   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:21.077838   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:21.129256   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:21.129478   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:21.322699   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:21.581091   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:21.630104   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:21.630207   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:21.822770   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:22.077348   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:22.129838   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:22.129891   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:22.322415   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:22.578056   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:22.586348   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:22.629633   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:22.629806   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:22.822549   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:23.078257   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:23.130134   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:23.130322   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:23.322600   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:23.577552   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:23.629570   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:23.629631   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:23.821974   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:24.077697   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:24.129751   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:24.129819   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:24.322398   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:24.578294   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:24.586430   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:24.629902   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:24.630000   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:24.712006   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:24.823227   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:25.077509   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:25.129821   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:25.130032   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:25.239871   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:25.239902   10685 retry.go:31] will retry after 21.38616268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
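
The retry waits logged for this apply (4.005862643s, then 8.183681212s, then 21.38616268s) roughly double each time with some randomness, consistent with exponential backoff plus jitter. A minimal sketch of that pattern, with the base interval, growth factor, and jitter range assumed rather than taken from minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn, roughly doubling the wait after each
	// failure and adding up to 50% jitter so concurrent retries do not
	// synchronize — the shape visible in the retry.go lines above.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		wait := base
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			if attempt == maxAttempts {
				break
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)+1))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			wait *= 2
		}
		return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 4*time.Second, func() error {
			calls++
			if calls < 3 {
				return errors.New("apply failed")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
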
	I1018 11:30:25.322395   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:25.578176   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:25.629673   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:25.629933   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:25.822526   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:26.077851   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:26.129620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:26.129919   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:26.321992   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:26.577678   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:26.586872   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:26.629265   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:26.629446   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:26.822996   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:27.077866   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:27.129638   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:27.129868   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:27.322561   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:27.577249   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:27.630107   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:27.630242   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:27.822741   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:28.077280   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:28.129975   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:28.130187   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:28.322620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:28.577625   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:28.587117   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:28.631544   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:28.631908   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:28.822289   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:29.077707   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:29.131295   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:29.131443   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:29.322949   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:29.577692   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:29.629656   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:29.629717   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:29.822206   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:30.077860   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:30.129804   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:30.129936   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:30.322537   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:30.577996   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:30.629674   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:30.629806   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:30.822385   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:31.078197   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:31.086372   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:31.129746   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:31.129964   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:31.322308   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:31.577947   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:31.629792   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:31.629897   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:31.822805   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:32.077438   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:32.130288   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:32.130447   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:32.323084   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:32.578053   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:32.629865   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:32.629906   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:32.822796   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:33.077426   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:33.088504   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:33.129958   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:33.130156   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:33.322630   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:33.577453   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:33.629164   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:33.629322   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:33.822715   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:34.077534   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:34.129602   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:34.129682   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:34.322143   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:34.577993   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:34.629558   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:34.629851   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:34.822300   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:35.078058   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:35.129739   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:35.129871   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:35.324276   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:35.578227   10685 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 11:30:35.578247   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:35.586137   10685 node_ready.go:49] node "addons-162665" is "Ready"
	I1018 11:30:35.586165   10685 node_ready.go:38] duration metric: took 41.002375212s for node "addons-162665" to be "Ready" ...
	I1018 11:30:35.586180   10685 api_server.go:52] waiting for apiserver process to appear ...
	I1018 11:30:35.586233   10685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:30:35.601873   10685 api_server.go:72] duration metric: took 41.57817834s to wait for apiserver process to appear ...
	I1018 11:30:35.601909   10685 api_server.go:88] waiting for apiserver healthz status ...
	I1018 11:30:35.601930   10685 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 11:30:35.606743   10685 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 11:30:35.607741   10685 api_server.go:141] control plane version: v1.34.1
	I1018 11:30:35.607774   10685 api_server.go:131] duration metric: took 5.857346ms to wait for apiserver health ...
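
The health wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns 200 with the body "ok" once the control plane is serving. A minimal polling sketch under that assumption — TLS verification is skipped here purely for illustration; minikube itself trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz probes url until it returns HTTP 200 or timeout elapses.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Illustration only: do not verify the serving cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.49.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
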
	I1018 11:30:35.607787   10685 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 11:30:35.679473   10685 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 11:30:35.679500   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:35.681032   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:35.682041   10685 system_pods.go:59] 20 kube-system pods found
	I1018 11:30:35.682076   10685 system_pods.go:61] "amd-gpu-device-plugin-qtz57" [7718c757-52e9-4c21-8387-b22e46dbd672] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:35.682086   10685 system_pods.go:61] "coredns-66bc5c9577-dd8db" [9e860bf0-8080-4685-be57-8e4372d70758] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:35.682100   10685 system_pods.go:61] "csi-hostpath-attacher-0" [808c9abd-09ef-4a82-a9b0-40e0b5583c62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:35.682108   10685 system_pods.go:61] "csi-hostpath-resizer-0" [5fc9ea30-c6c5-4b52-801e-6f6744fcb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:35.682117   10685 system_pods.go:61] "csi-hostpathplugin-vd8h9" [8084337b-ce37-4904-b2d8-f9d98bec885a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:35.682122   10685 system_pods.go:61] "etcd-addons-162665" [985d8d51-a9b4-4613-8496-616cbbc9ba77] Running
	I1018 11:30:35.682127   10685 system_pods.go:61] "kindnet-chh44" [c8dd40f2-5d47-4163-a0f5-b4a42c683205] Running
	I1018 11:30:35.682132   10685 system_pods.go:61] "kube-apiserver-addons-162665" [b0263b5e-10dd-451f-a711-eafcf586b058] Running
	I1018 11:30:35.682136   10685 system_pods.go:61] "kube-controller-manager-addons-162665" [602b205c-f553-44c4-b952-749da212d7fc] Running
	I1018 11:30:35.682144   10685 system_pods.go:61] "kube-ingress-dns-minikube" [448dbfd9-bfeb-46dd-b9d4-8223a2d0208b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:35.682151   10685 system_pods.go:61] "kube-proxy-952nl" [d7c98ee8-f772-4ace-9296-8ed60510d4c6] Running
	I1018 11:30:35.682156   10685 system_pods.go:61] "kube-scheduler-addons-162665" [ad5158d7-dd62-4cf1-b936-323a01c48bea] Running
	I1018 11:30:35.682164   10685 system_pods.go:61] "metrics-server-85b7d694d7-4fbgz" [7862dfcb-3720-49c5-a912-e836d1598eaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:35.682172   10685 system_pods.go:61] "nvidia-device-plugin-daemonset-l95vf" [4c8e1e2a-6ab0-4cde-8847-b7cdf5b01ab4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:35.682181   10685 system_pods.go:61] "registry-6b586f9694-8ns6k" [c800a208-4e00-4ea5-bacc-ab4677684b88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:35.682190   10685 system_pods.go:61] "registry-creds-764b6fb674-hx56w" [b711b8e2-3d97-490b-bb1b-e5272a73c7bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:35.682199   10685 system_pods.go:61] "registry-proxy-tsk7w" [34d517d6-de7d-42f2-88d2-ae400f0fce9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:35.682221   10685 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mhxbb" [e43d99f8-e9e2-4f3b-9b80-7b05e4c365db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.682231   10685 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q4cgf" [f5e34437-83ad-4871-83fc-22cf1c594cc6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.682238   10685 system_pods.go:61] "storage-provisioner" [757a0a21-65a5-42b5-8599-5bad27d50df7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 11:30:35.682247   10685 system_pods.go:74] duration metric: took 74.451132ms to wait for pod list to return data ...
	I1018 11:30:35.682258   10685 default_sa.go:34] waiting for default service account to be created ...
	I1018 11:30:35.687383   10685 default_sa.go:45] found service account: "default"
	I1018 11:30:35.687416   10685 default_sa.go:55] duration metric: took 5.15054ms for default service account to be created ...
	I1018 11:30:35.687428   10685 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 11:30:35.781236   10685 system_pods.go:86] 20 kube-system pods found
	I1018 11:30:35.781268   10685 system_pods.go:89] "amd-gpu-device-plugin-qtz57" [7718c757-52e9-4c21-8387-b22e46dbd672] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:35.781275   10685 system_pods.go:89] "coredns-66bc5c9577-dd8db" [9e860bf0-8080-4685-be57-8e4372d70758] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:35.781281   10685 system_pods.go:89] "csi-hostpath-attacher-0" [808c9abd-09ef-4a82-a9b0-40e0b5583c62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:35.781287   10685 system_pods.go:89] "csi-hostpath-resizer-0" [5fc9ea30-c6c5-4b52-801e-6f6744fcb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:35.781292   10685 system_pods.go:89] "csi-hostpathplugin-vd8h9" [8084337b-ce37-4904-b2d8-f9d98bec885a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:35.781297   10685 system_pods.go:89] "etcd-addons-162665" [985d8d51-a9b4-4613-8496-616cbbc9ba77] Running
	I1018 11:30:35.781302   10685 system_pods.go:89] "kindnet-chh44" [c8dd40f2-5d47-4163-a0f5-b4a42c683205] Running
	I1018 11:30:35.781308   10685 system_pods.go:89] "kube-apiserver-addons-162665" [b0263b5e-10dd-451f-a711-eafcf586b058] Running
	I1018 11:30:35.781311   10685 system_pods.go:89] "kube-controller-manager-addons-162665" [602b205c-f553-44c4-b952-749da212d7fc] Running
	I1018 11:30:35.781317   10685 system_pods.go:89] "kube-ingress-dns-minikube" [448dbfd9-bfeb-46dd-b9d4-8223a2d0208b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:35.781320   10685 system_pods.go:89] "kube-proxy-952nl" [d7c98ee8-f772-4ace-9296-8ed60510d4c6] Running
	I1018 11:30:35.781324   10685 system_pods.go:89] "kube-scheduler-addons-162665" [ad5158d7-dd62-4cf1-b936-323a01c48bea] Running
	I1018 11:30:35.781330   10685 system_pods.go:89] "metrics-server-85b7d694d7-4fbgz" [7862dfcb-3720-49c5-a912-e836d1598eaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:35.781343   10685 system_pods.go:89] "nvidia-device-plugin-daemonset-l95vf" [4c8e1e2a-6ab0-4cde-8847-b7cdf5b01ab4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:35.781350   10685 system_pods.go:89] "registry-6b586f9694-8ns6k" [c800a208-4e00-4ea5-bacc-ab4677684b88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:35.781357   10685 system_pods.go:89] "registry-creds-764b6fb674-hx56w" [b711b8e2-3d97-490b-bb1b-e5272a73c7bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:35.781369   10685 system_pods.go:89] "registry-proxy-tsk7w" [34d517d6-de7d-42f2-88d2-ae400f0fce9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:35.781380   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mhxbb" [e43d99f8-e9e2-4f3b-9b80-7b05e4c365db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.781393   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q4cgf" [f5e34437-83ad-4871-83fc-22cf1c594cc6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.781400   10685 system_pods.go:89] "storage-provisioner" [757a0a21-65a5-42b5-8599-5bad27d50df7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 11:30:35.781420   10685 retry.go:31] will retry after 284.500839ms: missing components: kube-dns
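
The "missing components: kube-dns" retry above indicates the waiter maps logical components to pods and found no running DNS pod yet. A minimal client-go sketch of such a check, assuming the standard k8s-app=kube-dns label in kube-system (minikube's own component mapping may differ in detail):

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List DNS pods by the conventional CoreDNS label selector.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if running == 0 {
			fmt.Println("missing components: kube-dns")
			return
		}
		fmt.Printf("kube-dns: %d running pod(s)\n", running)
	}
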
	I1018 11:30:35.821951   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:36.070792   10685 system_pods.go:86] 20 kube-system pods found
	I1018 11:30:36.070832   10685 system_pods.go:89] "amd-gpu-device-plugin-qtz57" [7718c757-52e9-4c21-8387-b22e46dbd672] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:36.070841   10685 system_pods.go:89] "coredns-66bc5c9577-dd8db" [9e860bf0-8080-4685-be57-8e4372d70758] Running
	I1018 11:30:36.070860   10685 system_pods.go:89] "csi-hostpath-attacher-0" [808c9abd-09ef-4a82-a9b0-40e0b5583c62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:36.070870   10685 system_pods.go:89] "csi-hostpath-resizer-0" [5fc9ea30-c6c5-4b52-801e-6f6744fcb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:36.070884   10685 system_pods.go:89] "csi-hostpathplugin-vd8h9" [8084337b-ce37-4904-b2d8-f9d98bec885a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:36.070893   10685 system_pods.go:89] "etcd-addons-162665" [985d8d51-a9b4-4613-8496-616cbbc9ba77] Running
	I1018 11:30:36.070899   10685 system_pods.go:89] "kindnet-chh44" [c8dd40f2-5d47-4163-a0f5-b4a42c683205] Running
	I1018 11:30:36.070903   10685 system_pods.go:89] "kube-apiserver-addons-162665" [b0263b5e-10dd-451f-a711-eafcf586b058] Running
	I1018 11:30:36.070912   10685 system_pods.go:89] "kube-controller-manager-addons-162665" [602b205c-f553-44c4-b952-749da212d7fc] Running
	I1018 11:30:36.070923   10685 system_pods.go:89] "kube-ingress-dns-minikube" [448dbfd9-bfeb-46dd-b9d4-8223a2d0208b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:36.070932   10685 system_pods.go:89] "kube-proxy-952nl" [d7c98ee8-f772-4ace-9296-8ed60510d4c6] Running
	I1018 11:30:36.070938   10685 system_pods.go:89] "kube-scheduler-addons-162665" [ad5158d7-dd62-4cf1-b936-323a01c48bea] Running
	I1018 11:30:36.070945   10685 system_pods.go:89] "metrics-server-85b7d694d7-4fbgz" [7862dfcb-3720-49c5-a912-e836d1598eaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:36.070960   10685 system_pods.go:89] "nvidia-device-plugin-daemonset-l95vf" [4c8e1e2a-6ab0-4cde-8847-b7cdf5b01ab4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:36.070969   10685 system_pods.go:89] "registry-6b586f9694-8ns6k" [c800a208-4e00-4ea5-bacc-ab4677684b88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:36.070977   10685 system_pods.go:89] "registry-creds-764b6fb674-hx56w" [b711b8e2-3d97-490b-bb1b-e5272a73c7bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:36.070984   10685 system_pods.go:89] "registry-proxy-tsk7w" [34d517d6-de7d-42f2-88d2-ae400f0fce9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:36.070991   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mhxbb" [e43d99f8-e9e2-4f3b-9b80-7b05e4c365db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:36.071000   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q4cgf" [f5e34437-83ad-4871-83fc-22cf1c594cc6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:36.071005   10685 system_pods.go:89] "storage-provisioner" [757a0a21-65a5-42b5-8599-5bad27d50df7] Running
	I1018 11:30:36.071017   10685 system_pods.go:126] duration metric: took 383.58023ms to wait for k8s-apps to be running ...
	I1018 11:30:36.071030   10685 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 11:30:36.071080   10685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:30:36.079343   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:36.088246   10685 system_svc.go:56] duration metric: took 17.204463ms WaitForService to wait for kubelet
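
The WaitForService step above reduces to checking a systemd unit's exit status: systemctl is-active --quiet exits 0 only when the unit is active, so no output parsing is needed. A minimal sketch of that check:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitActive reports whether a systemd unit is active, judged solely
	// by the exit status of `systemctl is-active --quiet <unit>`.
	func unitActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", unitActive("kubelet"))
	}
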
	I1018 11:30:36.088283   10685 kubeadm.go:586] duration metric: took 42.064592936s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:30:36.088307   10685 node_conditions.go:102] verifying NodePressure condition ...
	I1018 11:30:36.091198   10685 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 11:30:36.091250   10685 node_conditions.go:123] node cpu capacity is 8
	I1018 11:30:36.091267   10685 node_conditions.go:105] duration metric: took 2.954423ms to run NodePressure ...
	I1018 11:30:36.091283   10685 start.go:241] waiting for startup goroutines ...
	I1018 11:30:36.130785   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:36.130988   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:36.322887   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:36.577678   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:36.630472   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:36.630752   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:36.823191   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:37.078035   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:37.129547   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:37.129591   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:37.322573   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:37.578800   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:37.630491   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:37.630514   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:37.825714   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:38.078187   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:38.129797   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:38.130659   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:38.324232   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:38.578433   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:38.630580   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:38.630705   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:38.823693   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:39.078059   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:39.178397   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:39.178433   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:39.321916   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:39.577694   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:39.630903   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:39.631084   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:39.824620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:40.079381   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:40.131312   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:40.132700   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:40.322442   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:40.578748   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:40.630493   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:40.630571   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:40.823511   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:41.078169   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:41.130219   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:41.130324   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:41.322936   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:41.577432   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:41.630398   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:41.630419   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:41.823631   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:42.077942   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:42.129479   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:42.129522   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:42.323306   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:42.578690   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:42.630474   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:42.630916   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:42.822639   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:43.199474   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:43.199658   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:43.199799   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:43.328046   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:43.578451   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:43.631691   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:43.631728   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:43.823343   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:44.077860   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:44.130715   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:44.130749   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:44.322640   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:44.578127   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:44.630002   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:44.630026   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:44.822903   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:45.078100   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:45.178834   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:45.178934   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:45.323128   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:45.578853   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:45.630514   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:45.630524   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:45.823552   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:46.078548   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:46.179819   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:46.179881   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:46.322986   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:46.578398   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:46.626472   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:46.629796   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:46.629862   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:46.822305   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:47.079932   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:47.184752   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:47.184752   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:47.322873   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:47.335115   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:47.335147   10685 retry.go:31] will retry after 13.56763526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
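	The "will retry after 13.56763526s" line above is minikube's generic retry-with-backoff path (retry.go) kicking in after the failed apply. A minimal sketch of that pattern, under the assumption of a growing, jittered delay like the 13.5s and 16.9s gaps seen in this log; applyAddon and the backoff constants are illustrative, not minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// applyAddon stands in for the failing `kubectl apply` call.
	func applyAddon() error {
		return errors.New("error validating ig-crd.yaml: apiVersion not set, kind not set")
	}

	func main() {
		backoff := 10 * time.Second
		for attempt := 1; attempt <= 3; attempt++ {
			if err := applyAddon(); err == nil {
				fmt.Println("apply succeeded")
				return
			} else {
				// Jittered, growing delay between attempts.
				delay := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
				fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
				time.Sleep(delay)
				backoff = backoff * 3 / 2
			}
		}
		fmt.Println("giving up: the manifest itself is invalid, so retries cannot fix it")
	}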
	I1018 11:30:47.579105   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:47.629698   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:47.629853   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:47.823624   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:48.078054   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:48.129823   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:48.129844   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:48.322068   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:48.578104   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:48.629663   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:48.629692   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:48.822018   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:49.078990   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:49.132982   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:49.133154   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:49.323573   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:49.579047   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:49.632107   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:49.632836   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:49.825441   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:50.078566   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:50.131079   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:50.131184   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:50.322949   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:50.578291   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:50.630697   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:50.630994   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:50.822308   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:51.079541   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:51.130752   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:51.131330   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:51.323656   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:51.577656   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:51.630673   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:51.630800   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:51.823252   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:52.078593   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:52.130661   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:52.130678   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:52.322438   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:52.629355   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:52.629411   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:52.629506   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:52.823412   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:53.078846   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:53.130718   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:53.130843   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:53.322236   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:53.578517   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:53.630284   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:53.630465   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:53.823741   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:54.078098   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:54.130357   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:54.130530   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:54.322315   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:54.578446   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:54.630325   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:54.630505   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:54.823310   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:55.078498   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:55.130430   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:55.130474   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:55.323020   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:55.578328   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:55.629837   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:55.629932   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:55.822563   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:56.077596   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:56.129951   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:56.130113   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:56.322914   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:56.577895   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:56.629800   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:56.629888   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:56.822465   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:57.078541   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:57.177081   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:57.177177   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:57.349648   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:57.578175   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:57.630378   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:57.630681   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:57.822631   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:58.078505   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:58.131215   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:58.131720   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:58.322934   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:58.579108   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:58.630032   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:58.630094   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:58.822629   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:59.079054   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:59.130546   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:59.130584   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:59.361021   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:59.578103   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:59.679102   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:59.679209   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:59.822535   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.079153   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:00.130026   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:00.130073   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:00.323623   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.577924   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:00.629539   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:00.629565   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:00.823110   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.903177   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:31:01.078646   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:01.130607   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:01.130666   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:01.323258   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:01.577951   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:31:01.586146   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:01.586181   10685 retry.go:31] will retry after 16.904479278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:01.630257   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:01.630304   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:01.823153   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:02.078259   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:02.129689   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:02.129847   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:02.322532   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:02.578689   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:02.630337   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:02.630351   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:02.823408   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:03.078868   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:03.129295   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:03.129335   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:03.323701   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:03.578485   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:03.678683   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:03.678723   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:03.823251   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:04.080878   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:04.130948   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:04.131775   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:04.335517   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:04.579485   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:04.631325   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:04.631331   10685 kapi.go:107] duration metric: took 1m9.004733027s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 11:31:04.822175   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:05.078698   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:05.129887   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:05.323177   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:05.579177   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:05.630046   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:05.822685   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:06.078027   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:06.130100   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:06.322836   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:06.577072   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:06.629629   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:06.822157   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:07.078472   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:07.130559   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:07.322569   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:07.581138   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:07.629859   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:07.826970   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:08.078356   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:08.130409   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:08.323368   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:08.578751   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:08.630941   10685 kapi.go:107] duration metric: took 1m13.004341926s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 11:31:08.975652   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:09.106753   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:09.322332   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:09.578257   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:09.823087   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:10.078698   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:10.324310   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:10.578297   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:10.822876   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:11.078384   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:11.323411   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:11.578864   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:11.822644   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:12.077620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:12.323656   10685 kapi.go:107] duration metric: took 1m9.504126071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 11:31:12.326649   10685 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-162665 cluster.
	I1018 11:31:12.328108   10685 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 11:31:12.330055   10685 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 11:31:12.579338   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:13.078821   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:13.578399   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:14.078273   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:14.577472   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:15.078191   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:15.578071   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:16.078729   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:16.578732   10685 kapi.go:107] duration metric: took 1m20.504038466s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
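	The kapi.go lines above poll each addon's pods by label selector roughly twice per second until they leave Pending, then report the total wait as a duration metric. A minimal client-go sketch of that loop, assuming the selector and namespace from this log; waitForLabel is an illustrative name, not minikube's function:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 &&
				pods.Items[0].Status.Phase != corev1.PodPending {
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitForLabel(cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}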
	I1018 11:31:18.492530   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 11:31:19.018892   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 11:31:19.018996   10685 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
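	The failure above is deterministic: kubectl's client-side validation rejects any manifest document that does not declare both apiVersion and kind, so re-applying the same broken ig-crd.yaml can never succeed, and 'inspektor-gadget' is the one addon that fails to enable. A minimal sketch of that check (gopkg.in/yaml.v3 is an assumption here; kubectl's own validator is more involved):

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	func validate(doc []byte) error {
		var m map[string]interface{}
		if err := yaml.Unmarshal(doc, &m); err != nil {
			return err
		}
		var missing []string
		if m["apiVersion"] == nil {
			missing = append(missing, "apiVersion not set")
		}
		if m["kind"] == nil {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			return fmt.Errorf("error validating data: %v", missing)
		}
		return nil
	}

	func main() {
		// A manifest fragment missing both required fields, like the
		// rejected ig-crd.yaml in this run.
		fmt.Println(validate([]byte("metadata:\n  name: example\n")))
	}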
	I1018 11:31:19.020983   10685 out.go:179] * Enabled addons: registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, nvidia-device-plugin, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 11:31:19.022142   10685 addons.go:514] duration metric: took 1m24.998418872s for enable addons: enabled=[registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner cloud-spanner nvidia-device-plugin metrics-server default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 11:31:19.022178   10685 start.go:246] waiting for cluster config update ...
	I1018 11:31:19.022199   10685 start.go:255] writing updated cluster config ...
	I1018 11:31:19.022445   10685 ssh_runner.go:195] Run: rm -f paused
	I1018 11:31:19.026326   10685 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 11:31:19.029476   10685 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dd8db" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.033303   10685 pod_ready.go:94] pod "coredns-66bc5c9577-dd8db" is "Ready"
	I1018 11:31:19.033330   10685 pod_ready.go:86] duration metric: took 3.836571ms for pod "coredns-66bc5c9577-dd8db" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.035007   10685 pod_ready.go:83] waiting for pod "etcd-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.038206   10685 pod_ready.go:94] pod "etcd-addons-162665" is "Ready"
	I1018 11:31:19.038224   10685 pod_ready.go:86] duration metric: took 3.199968ms for pod "etcd-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.039930   10685 pod_ready.go:83] waiting for pod "kube-apiserver-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.043251   10685 pod_ready.go:94] pod "kube-apiserver-addons-162665" is "Ready"
	I1018 11:31:19.043270   10685 pod_ready.go:86] duration metric: took 3.322227ms for pod "kube-apiserver-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.044906   10685 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.430249   10685 pod_ready.go:94] pod "kube-controller-manager-addons-162665" is "Ready"
	I1018 11:31:19.430282   10685 pod_ready.go:86] duration metric: took 385.356512ms for pod "kube-controller-manager-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.630475   10685 pod_ready.go:83] waiting for pod "kube-proxy-952nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.030063   10685 pod_ready.go:94] pod "kube-proxy-952nl" is "Ready"
	I1018 11:31:20.030092   10685 pod_ready.go:86] duration metric: took 399.586435ms for pod "kube-proxy-952nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.230308   10685 pod_ready.go:83] waiting for pod "kube-scheduler-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.629921   10685 pod_ready.go:94] pod "kube-scheduler-addons-162665" is "Ready"
	I1018 11:31:20.629945   10685 pod_ready.go:86] duration metric: took 399.610694ms for pod "kube-scheduler-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.629956   10685 pod_ready.go:40] duration metric: took 1.603609293s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 11:31:20.673677   10685 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 11:31:20.675723   10685 out.go:179] * Done! kubectl is now configured to use "addons-162665" cluster and "default" namespace by default
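	The pod_ready.go lines above treat a pod as "Ready" when its PodReady condition reports True. A minimal client-go sketch of that condition check, using the coredns pod name from this log; isReady is an illustrative helper, not minikube's:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-dd8db", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s ready: %v\n", pod.Name, isReady(pod))
	}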
	
	
	==> CRI-O <==
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.285920662Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-rqgc8/POD" id=f7390217-19e3-420c-9f57-03209b757561 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.28602787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.292371874Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rqgc8 Namespace:default ID:425be14e3d786496811c68eef37beda9be97acfa05d84971c516573c1d36f0de UID:404c6add-caf5-4f54-b2b3-0359ba9d9aef NetNS:/var/run/netns/33c87092-58bf-4054-982e-528d0e0e7fd3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128ba0}] Aliases:map[]}"
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.292406722Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-rqgc8 to CNI network \"kindnet\" (type=ptp)"
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.302856251Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rqgc8 Namespace:default ID:425be14e3d786496811c68eef37beda9be97acfa05d84971c516573c1d36f0de UID:404c6add-caf5-4f54-b2b3-0359ba9d9aef NetNS:/var/run/netns/33c87092-58bf-4054-982e-528d0e0e7fd3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128ba0}] Aliases:map[]}"
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.303008027Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-rqgc8 for CNI network kindnet (type=ptp)"
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.303835414Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.30465889Z" level=info msg="Ran pod sandbox 425be14e3d786496811c68eef37beda9be97acfa05d84971c516573c1d36f0de with infra container: default/hello-world-app-5d498dc89-rqgc8/POD" id=f7390217-19e3-420c-9f57-03209b757561 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.306001053Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1476e95c-2628-4a6b-888a-c594e813349d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.306168838Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=1476e95c-2628-4a6b-888a-c594e813349d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.306235945Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=1476e95c-2628-4a6b-888a-c594e813349d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.306918869Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=aa6140ca-99ee-4deb-9dab-2593537a3572 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:33:57 addons-162665 crio[773]: time="2025-10-18T11:33:57.312351104Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.252295275Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=aa6140ca-99ee-4deb-9dab-2593537a3572 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.252856067Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=50f45583-018c-4adb-959d-f0654fbb517c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.254405558Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=399d501a-369c-41d1-bc70-9c514e877f7e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.257995594Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-rqgc8/hello-world-app" id=2ab0d969-1226-4948-8ad7-aaddc71e8aa9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.258631984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.264037852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.264252696Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9870a68dd5002c848bcda7d3d19fb0f06e49d21faad2475e3dfa0a61c2358f01/merged/etc/passwd: no such file or directory"
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.264286146Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9870a68dd5002c848bcda7d3d19fb0f06e49d21faad2475e3dfa0a61c2358f01/merged/etc/group: no such file or directory"
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.264588348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.297421431Z" level=info msg="Created container a898258c6a61caf12806eda83cce7eaff5480e49ec8b1b316e8143691bb68765: default/hello-world-app-5d498dc89-rqgc8/hello-world-app" id=2ab0d969-1226-4948-8ad7-aaddc71e8aa9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.298099918Z" level=info msg="Starting container: a898258c6a61caf12806eda83cce7eaff5480e49ec8b1b316e8143691bb68765" id=d8a2a4e1-17da-4c9c-81c4-0215210c3795 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 11:33:58 addons-162665 crio[773]: time="2025-10-18T11:33:58.299873954Z" level=info msg="Started container" PID=9915 containerID=a898258c6a61caf12806eda83cce7eaff5480e49ec8b1b316e8143691bb68765 description=default/hello-world-app-5d498dc89-rqgc8/hello-world-app id=d8a2a4e1-17da-4c9c-81c4-0215210c3795 name=/runtime.v1.RuntimeService/StartContainer sandboxID=425be14e3d786496811c68eef37beda9be97acfa05d84971c516573c1d36f0de
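	The CRI-O log above traces the standard CRI flow for a new pod: RunPodSandbox, then an ImageStatus check that misses the local store, a PullImage that resolves the tag to a digest, and finally CreateContainer/StartContainer. A minimal sketch of the ImageStatus-then-PullImage hop over the CRI gRPC API (k8s.io/cri-api); the socket path is CRI-O's default and error handling is trimmed:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		img := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}
		client := runtimeapi.NewImageServiceClient(conn)

		// "Checking image status": a nil Image in the response is a cache miss.
		status, err := client.ImageStatus(context.TODO(),
			&runtimeapi.ImageStatusRequest{Image: img})
		if err != nil {
			panic(err)
		}
		if status.Image == nil {
			// "Pulling image": fetch by tag; the response carries the resolved digest.
			pulled, err := client.PullImage(context.TODO(),
				&runtimeapi.PullImageRequest{Image: img})
			if err != nil {
				panic(err)
			}
			fmt.Println("pulled:", pulled.ImageRef)
		}
	}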
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a898258c6a61c       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   425be14e3d786       hello-world-app-5d498dc89-rqgc8             default
	ff53e54600e12       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   e43c08ceda637       registry-creds-764b6fb674-hx56w             kube-system
	a0994d26cf100       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   607989d1b74e7       nginx                                       default
	993a2b10e2026       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   b5469d09f8566       busybox                                     default
	488c15000b978       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	a27fdd7026b29       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	e58b8a219585a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	80ee1a432463a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	d539fd7cbcbbe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   7c6d96b73cbd1       gcp-auth-78565c9fb4-kr9d8                   gcp-auth
	1c7e5acf2100a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	46ebf17b2eaaa       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            2 minutes ago            Running             gadget                                   0                   9c27e42afb04e       gadget-vscpb                                gadget
	fe24ec6bccde8       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             2 minutes ago            Running             controller                               0                   36ca410debc3e       ingress-nginx-controller-675c5ddd98-splxz   ingress-nginx
	43a9f95eacc82       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   e5d84b0a13043       registry-proxy-tsk7w                        kube-system
	7f162f04036aa       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago            Running             csi-resizer                              0                   e45622ea7a09f       csi-hostpath-resizer-0                      kube-system
	763f4d62397d6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	2ab0798158fad       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              2 minutes ago            Running             yakd                                     0                   c383cce8bc50d       yakd-dashboard-5ff678cb9-8jpkg              yakd-dashboard
	230e9f4fd3747       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   406ae14baf268       csi-hostpath-attacher-0                     kube-system
	98ea2b43ee1f9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   d8d2220e2dc31       snapshot-controller-7d9fbc56b8-mhxbb        kube-system
	6055994c2f9ad       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   bdfe716adc399       ingress-nginx-admission-patch-d4dp5         ingress-nginx
	c47f2661c7342       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   528e1befe732d       nvidia-device-plugin-daemonset-l95vf        kube-system
	7da1e14278c12       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   3c07f98cb1613       amd-gpu-device-plugin-qtz57                 kube-system
	03c9856418e49       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   03d1a2af1b7a0       snapshot-controller-7d9fbc56b8-q4cgf        kube-system
	66eeb7fe3345b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   26df9f77bbc31       ingress-nginx-admission-create-g2s9g        ingress-nginx
	2d9dfc50ea0d7       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   4fb3295698524       metrics-server-85b7d694d7-4fbgz             kube-system
	86f0ff52ac8ce       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   01402d2be55e1       local-path-provisioner-648f6765c9-mrfgl     local-path-storage
	f9c877c63013c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   0d41833c8a2fb       kube-ingress-dns-minikube                   kube-system
	24f62efb65dfc       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   3aa84b61ab0a1       cloud-spanner-emulator-86bd5cbb97-rmg8m     default
	07d2ff78db059       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   15d2ba2abafd7       registry-6b586f9694-8ns6k                   kube-system
	bfb31922272c5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   529e8cc60ef3c       coredns-66bc5c9577-dd8db                    kube-system
	875e77b7948ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   818084b37bc78       storage-provisioner                         kube-system
	371ec5ccac551       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   77f155ba37ace       kube-proxy-952nl                            kube-system
	63d2fc63799c7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   d2964eaabd9f2       kindnet-chh44                               kube-system
	7c7aa4df8e12b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   21b89fefafe32       kube-controller-manager-addons-162665       kube-system
	4b7561783145a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   410373435ed89       kube-apiserver-addons-162665                kube-system
	ba7d02bd6b761       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   d3bcb0bdaaf12       kube-scheduler-addons-162665                kube-system
	a0d7b2076afe9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   a0763b46d9953       etcd-addons-162665                          kube-system
	
	
	==> coredns [bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb] <==
	[INFO] 10.244.0.22:53097 - 55888 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006304502s
	[INFO] 10.244.0.22:40575 - 41268 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006828671s
	[INFO] 10.244.0.22:42250 - 56787 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007615483s
	[INFO] 10.244.0.22:53693 - 8454 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004945003s
	[INFO] 10.244.0.22:37256 - 50028 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00523981s
	[INFO] 10.244.0.22:52193 - 242 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002444029s
	[INFO] 10.244.0.22:42223 - 43498 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002708791s
	[INFO] 10.244.0.25:40766 - 44374 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000185038s
	[INFO] 10.244.0.25:34956 - 27371 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000251946s
	[INFO] 10.244.0.25:50427 - 14734 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000111656s
	[INFO] 10.244.0.25:42671 - 26509 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000179684s
	[INFO] 10.244.0.25:44598 - 22372 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000130506s
	[INFO] 10.244.0.25:43907 - 49770 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000184265s
	[INFO] 10.244.0.25:59750 - 56891 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004513866s
	[INFO] 10.244.0.25:56676 - 11765 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004793882s
	[INFO] 10.244.0.25:38432 - 2789 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.00548125s
	[INFO] 10.244.0.25:60854 - 59323 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005807751s
	[INFO] 10.244.0.25:36318 - 17706 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004777629s
	[INFO] 10.244.0.25:59520 - 15176 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005942756s
	[INFO] 10.244.0.25:41986 - 28897 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005467544s
	[INFO] 10.244.0.25:55273 - 32340 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005965577s
	[INFO] 10.244.0.25:34693 - 36755 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001723306s
	[INFO] 10.244.0.25:37498 - 27202 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001794388s
	[INFO] 10.244.0.26:56728 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000203098s
	[INFO] 10.244.0.26:44221 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000302247s
	
	
	==> describe nodes <==
	Name:               addons-162665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-162665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-162665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T11_29_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-162665
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-162665"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 11:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-162665
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 11:33:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 11:33:32 +0000   Sat, 18 Oct 2025 11:29:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 11:33:32 +0000   Sat, 18 Oct 2025 11:29:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 11:33:32 +0000   Sat, 18 Oct 2025 11:29:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 11:33:32 +0000   Sat, 18 Oct 2025 11:30:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-162665
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                7f3dd06e-c800-4da1-b5f5-24431ef08e12
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  default                     cloud-spanner-emulator-86bd5cbb97-rmg8m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  default                     hello-world-app-5d498dc89-rqgc8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-vscpb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  gcp-auth                    gcp-auth-78565c9fb4-kr9d8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-splxz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m3s
	  kube-system                 amd-gpu-device-plugin-qtz57                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 coredns-66bc5c9577-dd8db                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpathplugin-vd8h9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 etcd-addons-162665                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m11s
	  kube-system                 kindnet-chh44                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m5s
	  kube-system                 kube-apiserver-addons-162665                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-addons-162665        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-952nl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-addons-162665                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 metrics-server-85b7d694d7-4fbgz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m3s
	  kube-system                 nvidia-device-plugin-daemonset-l95vf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 registry-6b586f9694-8ns6k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-creds-764b6fb674-hx56w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 registry-proxy-tsk7w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-mhxbb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-q4cgf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  local-path-storage          local-path-provisioner-648f6765c9-mrfgl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8jpkg               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  Starting                 4m11s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet          Node addons-162665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet          Node addons-162665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet          Node addons-162665 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m6s   node-controller  Node addons-162665 event: Registered Node addons-162665 in Controller
	  Normal  NodeReady                3m23s  kubelet          Node addons-162665 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.098201] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.055601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.500112] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 11:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.040343] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023998] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023847] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +2.047856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +4.031738] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[Oct18 11:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +16.382621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +32.253751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	
	
	==> etcd [a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956] <==
	{"level":"warn","ts":"2025-10-18T11:29:45.180311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.186397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.192356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.198566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.204693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.211629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.218208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.225265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.232340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.246110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.253286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.259477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.311206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:56.547650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:56.553789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:30:22.710978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:30:22.738048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:30:43.197695Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.221783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:30:43.198059Z","caller":"traceutil/trace.go:172","msg":"trace[1690408968] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:966; }","duration":"121.607288ms","start":"2025-10-18T11:30:43.076435Z","end":"2025-10-18T11:30:43.198043Z","steps":["trace[1690408968] 'range keys from in-memory index tree'  (duration: 121.142547ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:30:52.627498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.779294ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040713988101891 > lease_revoke:<id:70cc99f7152c55bc>","response":"size:29"}
	{"level":"info","ts":"2025-10-18T11:30:57.347936Z","caller":"traceutil/trace.go:172","msg":"trace[1705074662] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"171.263528ms","start":"2025-10-18T11:30:57.176652Z","end":"2025-10-18T11:30:57.347916Z","steps":["trace[1705074662] 'process raft request'  (duration: 144.517198ms)","trace[1705074662] 'compare'  (duration: 26.639881ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T11:30:57.347966Z","caller":"traceutil/trace.go:172","msg":"trace[1291800020] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"167.803525ms","start":"2025-10-18T11:30:57.180147Z","end":"2025-10-18T11:30:57.347950Z","steps":["trace[1291800020] 'process raft request'  (duration: 167.755638ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:31:08.973952Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.342241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:31:08.974016Z","caller":"traceutil/trace.go:172","msg":"trace[255875423] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"151.413196ms","start":"2025-10-18T11:31:08.822587Z","end":"2025-10-18T11:31:08.974000Z","steps":["trace[255875423] 'agreement among raft nodes before linearized reading'  (duration: 58.301056ms)","trace[255875423] 'range keys from in-memory index tree'  (duration: 93.008116ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T11:31:08.974109Z","caller":"traceutil/trace.go:172","msg":"trace[1475923364] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"149.520352ms","start":"2025-10-18T11:31:08.824573Z","end":"2025-10-18T11:31:08.974093Z","steps":["trace[1475923364] 'process raft request'  (duration: 56.359522ms)","trace[1475923364] 'compare'  (duration: 93.028727ms)"],"step_count":2}
	
	
	==> gcp-auth [d539fd7cbcbbe623dd11ed18b85907089bc31258e45ad6360d0dcb7f28bb0cb5] <==
	2025/10/18 11:31:12 GCP Auth Webhook started!
	2025/10/18 11:31:20 Ready to marshal response ...
	2025/10/18 11:31:20 Ready to write response ...
	2025/10/18 11:31:21 Ready to marshal response ...
	2025/10/18 11:31:21 Ready to write response ...
	2025/10/18 11:31:21 Ready to marshal response ...
	2025/10/18 11:31:21 Ready to write response ...
	2025/10/18 11:31:35 Ready to marshal response ...
	2025/10/18 11:31:35 Ready to write response ...
	2025/10/18 11:31:39 Ready to marshal response ...
	2025/10/18 11:31:39 Ready to write response ...
	2025/10/18 11:31:44 Ready to marshal response ...
	2025/10/18 11:31:44 Ready to write response ...
	2025/10/18 11:31:49 Ready to marshal response ...
	2025/10/18 11:31:49 Ready to write response ...
	2025/10/18 11:31:49 Ready to marshal response ...
	2025/10/18 11:31:49 Ready to write response ...
	2025/10/18 11:31:56 Ready to marshal response ...
	2025/10/18 11:31:56 Ready to write response ...
	2025/10/18 11:31:59 Ready to marshal response ...
	2025/10/18 11:31:59 Ready to write response ...
	2025/10/18 11:33:56 Ready to marshal response ...
	2025/10/18 11:33:56 Ready to write response ...
	
	
	==> kernel <==
	 11:33:58 up 16 min,  0 user,  load average: 0.43, 0.69, 0.35
	Linux addons-162665 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb] <==
	I1018 11:31:54.787414       1 main.go:301] handling current node
	I1018 11:32:04.789351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:32:04.789385       1 main.go:301] handling current node
	I1018 11:32:14.792903       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:32:14.792946       1 main.go:301] handling current node
	I1018 11:32:24.788838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:32:24.788869       1 main.go:301] handling current node
	I1018 11:32:34.793219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:32:34.793254       1 main.go:301] handling current node
	I1018 11:32:44.795125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:32:44.795163       1 main.go:301] handling current node
	I1018 11:32:54.796154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:32:54.796183       1 main.go:301] handling current node
	I1018 11:33:04.793324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:33:04.793370       1 main.go:301] handling current node
	I1018 11:33:14.788233       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:33:14.788279       1 main.go:301] handling current node
	I1018 11:33:24.796313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:33:24.796348       1 main.go:301] handling current node
	I1018 11:33:34.787967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:33:34.787996       1 main.go:301] handling current node
	I1018 11:33:44.787870       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:33:44.787901       1 main.go:301] handling current node
	I1018 11:33:54.795907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:33:54.795935       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92] <==
	W1018 11:30:22.731505       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 11:30:22.737969       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 11:30:35.279995       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.280062       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:35.280102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.280128       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:35.299474       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.299512       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:35.300701       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.300737       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:47.115978       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 11:30:47.116084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 11:30:47.116135       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.71.1:443: connect: connection refused" logger="UnhandledError"
	E1018 11:30:47.117968       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.71.1:443: connect: connection refused" logger="UnhandledError"
	E1018 11:30:47.123476       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.71.1:443: connect: connection refused" logger="UnhandledError"
	I1018 11:30:47.179510       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 11:31:29.413107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50808: use of closed network connection
	E1018 11:31:29.575189       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50842: use of closed network connection
	I1018 11:31:35.352643       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 11:31:35.543036       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.69.112"}
	I1018 11:31:54.172753       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 11:33:57.053627       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.90.85"}
	
	
	==> kube-controller-manager [7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660] <==
	I1018 11:29:52.690443       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 11:29:52.690459       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 11:29:52.690740       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 11:29:52.690753       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 11:29:52.694001       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 11:29:52.694067       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 11:29:52.694102       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 11:29:52.694109       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 11:29:52.694113       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 11:29:52.694243       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:29:52.697372       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 11:29:52.700112       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-162665" podCIDRs=["10.244.0.0/24"]
	I1018 11:29:52.704226       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 11:29:52.704248       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 11:29:52.704287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 11:29:52.705451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 11:29:52.712824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 11:30:22.699196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 11:30:22.699331       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 11:30:22.699376       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 11:30:22.721936       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 11:30:22.725540       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 11:30:22.800507       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:30:22.826160       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 11:30:37.656533       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a] <==
	I1018 11:29:54.372926       1 server_linux.go:53] "Using iptables proxy"
	I1018 11:29:54.484662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 11:29:54.585290       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 11:29:54.585359       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 11:29:54.591159       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 11:29:55.078207       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 11:29:55.078290       1 server_linux.go:132] "Using iptables Proxier"
	I1018 11:29:55.119783       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 11:29:55.130172       1 server.go:527] "Version info" version="v1.34.1"
	I1018 11:29:55.130484       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 11:29:55.133600       1 config.go:200] "Starting service config controller"
	I1018 11:29:55.134894       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 11:29:55.134139       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 11:29:55.135050       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 11:29:55.134590       1 config.go:309] "Starting node config controller"
	I1018 11:29:55.135130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 11:29:55.135154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 11:29:55.134131       1 config.go:106] "Starting endpoint slice config controller"
	I1018 11:29:55.135197       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 11:29:55.236096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 11:29:55.236155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 11:29:55.236502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51] <==
	E1018 11:29:45.712709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:29:45.712729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 11:29:45.712825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 11:29:45.712851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 11:29:45.712896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 11:29:45.712919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 11:29:45.712925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 11:29:45.712973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 11:29:45.712512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 11:29:45.713099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 11:29:45.713205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 11:29:45.713243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 11:29:45.713254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 11:29:45.713254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 11:29:45.713313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 11:29:45.713315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 11:29:46.529864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 11:29:46.700575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 11:29:46.761603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 11:29:46.809654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 11:29:46.821912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:29:46.835420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 11:29:46.851019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 11:29:47.065253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 11:29:49.910751       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.194068    1309 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/559278b1-d034-44c3-a3e6-a0418bfb688b-gcp-creds\") pod \"559278b1-d034-44c3-a3e6-a0418bfb688b\" (UID: \"559278b1-d034-44c3-a3e6-a0418bfb688b\") "
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.194109    1309 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2mqb\" (UniqueName: \"kubernetes.io/projected/559278b1-d034-44c3-a3e6-a0418bfb688b-kube-api-access-z2mqb\") pod \"559278b1-d034-44c3-a3e6-a0418bfb688b\" (UID: \"559278b1-d034-44c3-a3e6-a0418bfb688b\") "
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.194162    1309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/559278b1-d034-44c3-a3e6-a0418bfb688b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "559278b1-d034-44c3-a3e6-a0418bfb688b" (UID: "559278b1-d034-44c3-a3e6-a0418bfb688b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.194272    1309 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/559278b1-d034-44c3-a3e6-a0418bfb688b-gcp-creds\") on node \"addons-162665\" DevicePath \"\""
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.196351    1309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559278b1-d034-44c3-a3e6-a0418bfb688b-kube-api-access-z2mqb" (OuterVolumeSpecName: "kube-api-access-z2mqb") pod "559278b1-d034-44c3-a3e6-a0418bfb688b" (UID: "559278b1-d034-44c3-a3e6-a0418bfb688b"). InnerVolumeSpecName "kube-api-access-z2mqb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.197275    1309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^0f00ba25-ac16-11f0-b5e3-0ad7199da480" (OuterVolumeSpecName: "task-pv-storage") pod "559278b1-d034-44c3-a3e6-a0418bfb688b" (UID: "559278b1-d034-44c3-a3e6-a0418bfb688b"). InnerVolumeSpecName "pvc-2624c7a0-e816-455e-9acd-3164a3ffcf24". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.294549    1309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z2mqb\" (UniqueName: \"kubernetes.io/projected/559278b1-d034-44c3-a3e6-a0418bfb688b-kube-api-access-z2mqb\") on node \"addons-162665\" DevicePath \"\""
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.294610    1309 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-2624c7a0-e816-455e-9acd-3164a3ffcf24\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0f00ba25-ac16-11f0-b5e3-0ad7199da480\") on node \"addons-162665\" "
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.299029    1309 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-2624c7a0-e816-455e-9acd-3164a3ffcf24" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^0f00ba25-ac16-11f0-b5e3-0ad7199da480") on node "addons-162665"
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.395779    1309 reconciler_common.go:299] "Volume detached for volume \"pvc-2624c7a0-e816-455e-9acd-3164a3ffcf24\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0f00ba25-ac16-11f0-b5e3-0ad7199da480\") on node \"addons-162665\" DevicePath \"\""
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.473266    1309 scope.go:117] "RemoveContainer" containerID="692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6"
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.483433    1309 scope.go:117] "RemoveContainer" containerID="692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6"
	Oct 18 11:32:07 addons-162665 kubelet[1309]: E1018 11:32:07.483830    1309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6\": container with ID starting with 692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6 not found: ID does not exist" containerID="692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6"
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.483899    1309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6"} err="failed to get container status \"692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6\": rpc error: code = NotFound desc = could not find container \"692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6\": container with ID starting with 692fe70485f74eaf458fa25df7a77ad66398f4cc985d9a1873ca976150ab90d6 not found: ID does not exist"
	Oct 18 11:32:07 addons-162665 kubelet[1309]: I1018 11:32:07.898791    1309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559278b1-d034-44c3-a3e6-a0418bfb688b" path="/var/lib/kubelet/pods/559278b1-d034-44c3-a3e6-a0418bfb688b/volumes"
	Oct 18 11:32:15 addons-162665 kubelet[1309]: I1018 11:32:15.896162    1309 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-l95vf" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:32:26 addons-162665 kubelet[1309]: I1018 11:32:26.896625    1309 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tsk7w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:32:47 addons-162665 kubelet[1309]: I1018 11:32:47.947523    1309 scope.go:117] "RemoveContainer" containerID="457df04922f0610abbe12df75cb5afb3270344bff0c1b7efeebe46c9e0b19fde"
	Oct 18 11:32:47 addons-162665 kubelet[1309]: I1018 11:32:47.957700    1309 scope.go:117] "RemoveContainer" containerID="4c3268c13cd64bce4a88139dc9c8f87cf696a93b05c8e60d1cf131346bbd48d7"
	Oct 18 11:32:47 addons-162665 kubelet[1309]: I1018 11:32:47.966031    1309 scope.go:117] "RemoveContainer" containerID="5110ed4741964657e132ce1ca81b9084409bf959eb441c0ff9b78665d12acf96"
	Oct 18 11:32:58 addons-162665 kubelet[1309]: I1018 11:32:58.895862    1309 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-qtz57" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:33:41 addons-162665 kubelet[1309]: I1018 11:33:41.896287    1309 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-l95vf" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:33:56 addons-162665 kubelet[1309]: I1018 11:33:56.895822    1309 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tsk7w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:33:57 addons-162665 kubelet[1309]: I1018 11:33:57.117825    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/404c6add-caf5-4f54-b2b3-0359ba9d9aef-gcp-creds\") pod \"hello-world-app-5d498dc89-rqgc8\" (UID: \"404c6add-caf5-4f54-b2b3-0359ba9d9aef\") " pod="default/hello-world-app-5d498dc89-rqgc8"
	Oct 18 11:33:57 addons-162665 kubelet[1309]: I1018 11:33:57.117909    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzz6x\" (UniqueName: \"kubernetes.io/projected/404c6add-caf5-4f54-b2b3-0359ba9d9aef-kube-api-access-hzz6x\") pod \"hello-world-app-5d498dc89-rqgc8\" (UID: \"404c6add-caf5-4f54-b2b3-0359ba9d9aef\") " pod="default/hello-world-app-5d498dc89-rqgc8"
	
	
	==> storage-provisioner [875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3] <==
	W1018 11:33:34.448583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:36.451896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:36.457125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:38.460181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:38.464928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:40.467454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:40.470881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:42.473975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:42.477440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:44.479997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:44.484892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:46.488043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:46.491685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:48.494545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:48.498910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:50.501671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:50.506483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:52.509560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:52.513383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:54.516920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:54.520391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:56.523281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:56.526880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:58.529870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:33:58.533150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
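Note: the storage-provisioner warnings above are harmless for these tests but flag the v1 Endpoints deprecation in Kubernetes 1.33+. A minimal client-go sketch of the recommended replacement, listing discovery.k8s.io/v1 EndpointSlices instead of Endpoints (the in-cluster config and the kube-system namespace are illustrative assumptions, not what the provisioner does internally):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config; outside a pod, clientcmd.BuildConfigFromFlags would be used instead.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// discovery.k8s.io/v1 EndpointSlice is the replacement the warning recommends.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}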
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-162665 -n addons-162665
helpers_test.go:269: (dbg) Run:  kubectl --context addons-162665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-162665 describe pod ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-162665 describe pod ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5: exit status 1 (59.169485ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g2s9g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d4dp5" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-162665 describe pod ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (232.750444ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:33:59.468230   25205 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:33:59.468480   25205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:33:59.468488   25205 out.go:374] Setting ErrFile to fd 2...
	I1018 11:33:59.468492   25205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:33:59.468662   25205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:33:59.468936   25205 mustload.go:65] Loading cluster: addons-162665
	I1018 11:33:59.469302   25205 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:33:59.469322   25205 addons.go:606] checking whether the cluster is paused
	I1018 11:33:59.469422   25205 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:33:59.469437   25205 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:33:59.469807   25205 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:33:59.488485   25205 ssh_runner.go:195] Run: systemctl --version
	I1018 11:33:59.488546   25205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:33:59.505049   25205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:33:59.599188   25205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:33:59.599270   25205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:33:59.627869   25205 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:33:59.627891   25205 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:33:59.627896   25205 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:33:59.627900   25205 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:33:59.627908   25205 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:33:59.627912   25205 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:33:59.627916   25205 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:33:59.627920   25205 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:33:59.627924   25205 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:33:59.627943   25205 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:33:59.627951   25205 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:33:59.627956   25205 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:33:59.627960   25205 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:33:59.627967   25205 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:33:59.627972   25205 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:33:59.627987   25205 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:33:59.627995   25205 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:33:59.628001   25205 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:33:59.628005   25205 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:33:59.628008   25205 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:33:59.628011   25205 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:33:59.628015   25205 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:33:59.628020   25205 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:33:59.628024   25205 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:33:59.628027   25205 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:33:59.628032   25205 cri.go:89] found id: ""
	I1018 11:33:59.628086   25205 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:33:59.642251   25205 out.go:203] 
	W1018 11:33:59.643917   25205 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:33:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:33:59.643937   25205 out.go:285] * 
	W1018 11:33:59.647046   25205 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:33:59.648384   25205 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable ingress --alsologtostderr -v=1: exit status 11 (221.136415ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:33:59.693425   25272 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:33:59.693706   25272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:33:59.693715   25272 out.go:374] Setting ErrFile to fd 2...
	I1018 11:33:59.693719   25272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:33:59.693911   25272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:33:59.694156   25272 mustload.go:65] Loading cluster: addons-162665
	I1018 11:33:59.694496   25272 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:33:59.694515   25272 addons.go:606] checking whether the cluster is paused
	I1018 11:33:59.694594   25272 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:33:59.694606   25272 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:33:59.694963   25272 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:33:59.712098   25272 ssh_runner.go:195] Run: systemctl --version
	I1018 11:33:59.712161   25272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:33:59.729026   25272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:33:59.823181   25272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:33:59.823283   25272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:33:59.850671   25272 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:33:59.850699   25272 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:33:59.850717   25272 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:33:59.850724   25272 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:33:59.850729   25272 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:33:59.850734   25272 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:33:59.850739   25272 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:33:59.850748   25272 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:33:59.850751   25272 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:33:59.850780   25272 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:33:59.850789   25272 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:33:59.850794   25272 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:33:59.850801   25272 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:33:59.850804   25272 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:33:59.850809   25272 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:33:59.850818   25272 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:33:59.850824   25272 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:33:59.850828   25272 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:33:59.850831   25272 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:33:59.850833   25272 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:33:59.850836   25272 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:33:59.850838   25272 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:33:59.850841   25272 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:33:59.850843   25272 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:33:59.850845   25272 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:33:59.850848   25272 cri.go:89] found id: ""
	I1018 11:33:59.850889   25272 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:33:59.864282   25272 out.go:203] 
	W1018 11:33:59.865610   25272 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:33:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:33:59.865632   25272 out.go:285] * 
	W1018 11:33:59.868606   25272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:33:59.870112   25272 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.77s)
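Note: every failed `addons disable` in this report shares one root cause. minikube's paused-cluster check shells out to `sudo runc list -f json`, but on this crio node `/run/runc` does not exist (crio here presumably uses a different OCI runtime binary, such as crun), so the check exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED. Below is a hypothetical runtime-agnostic probe, not minikube's actual code; whether crun accepts the same `list -f json` flags as runc is an unverified assumption:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listOCIContainers tries each candidate OCI runtime binary until one
	// answers the `list` command. Purely illustrative; crun's flag
	// compatibility with runc is assumed, not verified.
	func listOCIContainers() ([]byte, error) {
		for _, rt := range []string{"runc", "crun"} {
			out, err := exec.Command("sudo", rt, "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
		}
		return nil, fmt.Errorf("no OCI runtime responded to `list`")
	}

	func main() {
		out, err := listOCIContainers()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(string(out))
	}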

TestAddons/parallel/InspektorGadget (6.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vscpb" [2b393de7-3eb8-4b60-a1c3-21818053fff6] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003187753s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (224.951078ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:31:38.375524   20660 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:38.375810   20660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:38.375820   20660 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:38.375824   20660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:38.376033   20660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:38.376265   20660 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:38.376573   20660 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:38.376588   20660 addons.go:606] checking whether the cluster is paused
	I1018 11:31:38.376660   20660 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:38.376672   20660 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:38.377050   20660 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:38.394008   20660 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:38.394068   20660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:38.410812   20660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:38.505646   20660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:38.505715   20660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:38.533643   20660 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:38.533669   20660 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:38.533675   20660 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:38.533680   20660 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:38.533684   20660 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:38.533689   20660 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:38.533691   20660 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:38.533694   20660 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:38.533696   20660 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:38.533708   20660 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:38.533713   20660 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:38.533717   20660 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:38.533721   20660 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:38.533725   20660 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:38.533730   20660 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:38.533737   20660 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:38.533745   20660 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:38.533751   20660 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:38.533775   20660 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:38.533781   20660 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:38.533785   20660 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:38.533789   20660 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:38.533793   20660 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:38.533797   20660 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:38.533809   20660 cri.go:89] found id: ""
	I1018 11:31:38.533853   20660 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:38.548020   20660 out.go:203] 
	W1018 11:31:38.549414   20660 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:38.549433   20660 out.go:285] * 
	W1018 11:31:38.552378   20660 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:38.553800   20660 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.23s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.907419ms
I1018 11:31:29.809716    9360 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 11:31:29.809734    9360 kapi.go:107] duration metric: took 3.214837ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4fbgz" [7862dfcb-3720-49c5-a912-e836d1598eaa] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003829372s
addons_test.go:463: (dbg) Run:  kubectl --context addons-162665 top pods -n kube-system
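Note: `kubectl top pods` reads the metrics.k8s.io API that the metrics-server addon under test serves. A minimal sketch of the same query through the Go metrics client; the kubeconfig path and namespace are illustrative assumptions:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		// Kubeconfig path is an illustrative assumption.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		mc := metricsclient.NewForConfigOrDie(cfg)

		// Same data that `kubectl top pods -n kube-system` renders.
		podMetrics, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pm := range podMetrics.Items {
			for _, c := range pm.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", pm.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
			}
		}
	}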
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (231.781796ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:31:34.913683   20017 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:34.913850   20017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:34.913860   20017 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:34.913864   20017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:34.914090   20017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:34.914370   20017 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:34.914688   20017 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:34.914706   20017 addons.go:606] checking whether the cluster is paused
	I1018 11:31:34.914815   20017 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:34.914833   20017 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:34.915244   20017 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:34.933268   20017 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:34.933351   20017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:34.952750   20017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:35.049290   20017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:35.049388   20017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:35.077978   20017 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:35.077997   20017 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:35.078001   20017 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:35.078003   20017 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:35.078006   20017 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:35.078009   20017 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:35.078011   20017 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:35.078013   20017 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:35.078015   20017 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:35.078020   20017 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:35.078024   20017 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:35.078027   20017 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:35.078031   20017 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:35.078035   20017 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:35.078039   20017 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:35.078046   20017 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:35.078049   20017 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:35.078056   20017 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:35.078059   20017 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:35.078061   20017 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:35.078065   20017 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:35.078068   20017 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:35.078070   20017 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:35.078072   20017 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:35.078074   20017 cri.go:89] found id: ""
	I1018 11:31:35.078109   20017 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:35.092612   20017 out.go:203] 
	W1018 11:31:35.094193   20017 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:35.094232   20017 out.go:285] * 
	W1018 11:31:35.097202   20017 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:35.098629   20017 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)

TestAddons/parallel/CSI (38.47s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1018 11:31:29.806532    9360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.224044ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-162665 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc -o jsonpath={.status.phase} -n default
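Note: the repeated helpers_test.go:402 runs above are the harness polling the PVC until its phase reports Bound. An equivalent client-go poll, as a sketch; the interval, timeout, and kubeconfig path are illustrative assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, up to 6m, mirroring the jsonpath loop in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := client.CoreV1().PersistentVolumeClaims("default").Get(ctx, "hpvc", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pvc hpvc is Bound")
	}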
addons_test.go:562: (dbg) Run:  kubectl --context addons-162665 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6e6f2241-e23d-4571-b283-6a00244796e6] Pending
helpers_test.go:352: "task-pv-pod" [6e6f2241-e23d-4571-b283-6a00244796e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [6e6f2241-e23d-4571-b283-6a00244796e6] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003672555s
addons_test.go:572: (dbg) Run:  kubectl --context addons-162665 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-162665 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-162665 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-162665 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-162665 delete pod task-pv-pod: (1.207430774s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-162665 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-162665 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-162665 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [559278b1-d034-44c3-a3e6-a0418bfb688b] Pending
helpers_test.go:352: "task-pv-pod-restore" [559278b1-d034-44c3-a3e6-a0418bfb688b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [559278b1-d034-44c3-a3e6-a0418bfb688b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003843564s
addons_test.go:614: (dbg) Run:  kubectl --context addons-162665 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-162665 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-162665 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (230.017331ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:32:07.861044   23010 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:32:07.861301   23010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:32:07.861310   23010 out.go:374] Setting ErrFile to fd 2...
	I1018 11:32:07.861314   23010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:32:07.861534   23010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:32:07.861787   23010 mustload.go:65] Loading cluster: addons-162665
	I1018 11:32:07.862098   23010 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:32:07.862113   23010 addons.go:606] checking whether the cluster is paused
	I1018 11:32:07.862186   23010 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:32:07.862197   23010 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:32:07.862564   23010 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:32:07.879797   23010 ssh_runner.go:195] Run: systemctl --version
	I1018 11:32:07.879862   23010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:32:07.899149   23010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:32:07.995402   23010 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:32:07.995476   23010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:32:08.023828   23010 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:32:08.023853   23010 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:32:08.023858   23010 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:32:08.023862   23010 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:32:08.023864   23010 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:32:08.023867   23010 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:32:08.023870   23010 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:32:08.023873   23010 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:32:08.023875   23010 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:32:08.023880   23010 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:32:08.023882   23010 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:32:08.023884   23010 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:32:08.023899   23010 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:32:08.023902   23010 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:32:08.023904   23010 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:32:08.023908   23010 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:32:08.023910   23010 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:32:08.023914   23010 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:32:08.023916   23010 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:32:08.023919   23010 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:32:08.023923   23010 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:32:08.023926   23010 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:32:08.023928   23010 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:32:08.023930   23010 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:32:08.023933   23010 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:32:08.023935   23010 cri.go:89] found id: ""
	I1018 11:32:08.023976   23010 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:32:08.038262   23010 out.go:203] 
	W1018 11:32:08.039578   23010 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:32:08.039597   23010 out.go:285] * 
	W1018 11:32:08.042667   23010 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:32:08.044034   23010 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (229.379141ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:32:08.089014   23072 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:32:08.089348   23072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:32:08.089358   23072 out.go:374] Setting ErrFile to fd 2...
	I1018 11:32:08.089366   23072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:32:08.089601   23072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:32:08.089924   23072 mustload.go:65] Loading cluster: addons-162665
	I1018 11:32:08.090256   23072 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:32:08.090275   23072 addons.go:606] checking whether the cluster is paused
	I1018 11:32:08.090372   23072 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:32:08.090389   23072 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:32:08.090823   23072 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:32:08.112096   23072 ssh_runner.go:195] Run: systemctl --version
	I1018 11:32:08.112163   23072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:32:08.130152   23072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:32:08.225191   23072 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:32:08.225269   23072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:32:08.253647   23072 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:32:08.253667   23072 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:32:08.253671   23072 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:32:08.253673   23072 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:32:08.253676   23072 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:32:08.253679   23072 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:32:08.253681   23072 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:32:08.253683   23072 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:32:08.253686   23072 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:32:08.253691   23072 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:32:08.253693   23072 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:32:08.253695   23072 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:32:08.253698   23072 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:32:08.253700   23072 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:32:08.253702   23072 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:32:08.253706   23072 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:32:08.253709   23072 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:32:08.253714   23072 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:32:08.253723   23072 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:32:08.253737   23072 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:32:08.253745   23072 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:32:08.253749   23072 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:32:08.253752   23072 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:32:08.253756   23072 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:32:08.253782   23072 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:32:08.253785   23072 cri.go:89] found id: ""
	I1018 11:32:08.253828   23072 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:32:08.267521   23072 out.go:203] 
	W1018 11:32:08.269101   23072 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:32:08.269124   23072 out.go:285] * 
	W1018 11:32:08.272547   23072 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:32:08.273839   23072 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (38.47s)
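Note: every addon toggle in this run aborts at the same step. Before enabling or disabling an addon, minikube checks whether the cluster is paused by SSHing to the node and running `sudo runc list -f json`; on this crio node the runc state directory /run/runc was never created, so the check itself fails and the command exits with MK_ADDON_DISABLE_PAUSED. The Go sketch below is illustrative only (it is not minikube's actual cri.go code, and the listPaused helper with its lenient error handling is an assumption): it reproduces the check and treats the missing state directory as "nothing paused" instead of a fatal error.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// runcContainer holds the two fields of `runc list -f json` output the
	// paused check needs.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listPaused runs the same command the log shows and returns the IDs of
	// paused containers. A missing state directory ("open /run/runc: no such
	// file or directory") is treated as an empty list rather than an error.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil // runc never created state here: nothing can be paused
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var containers []runcContainer
		// runc prints the literal "null" for an empty list; Unmarshal accepts it.
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var ids []string
		for _, c := range containers {
			if c.Status == "paused" {
				ids = append(ids, c.ID)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listPaused()
		fmt.Println(ids, err)
	}

runc's exit status does not distinguish "no state directory" from other failures, so the string match above is the piece to tighten if this sketch is adapted.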

x
+
TestAddons/parallel/Headlamp (2.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-162665 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-162665 --alsologtostderr -v=1: exit status 11 (245.75762ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 11:31:29.854707   19132 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:29.855076   19132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:29.855092   19132 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:29.855099   19132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:29.855410   19132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:29.855798   19132 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:29.856296   19132 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:29.856324   19132 addons.go:606] checking whether the cluster is paused
	I1018 11:31:29.856460   19132 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:29.856478   19132 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:29.857079   19132 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:29.878394   19132 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:29.878472   19132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:29.899468   19132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:29.998085   19132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:29.998199   19132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:30.027847   19132 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:30.027876   19132 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:30.027881   19132 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:30.027884   19132 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:30.027887   19132 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:30.027890   19132 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:30.027893   19132 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:30.027895   19132 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:30.027897   19132 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:30.027906   19132 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:30.027908   19132 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:30.027910   19132 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:30.027913   19132 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:30.027915   19132 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:30.027918   19132 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:30.027927   19132 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:30.027935   19132 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:30.027941   19132 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:30.027945   19132 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:30.027949   19132 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:30.027953   19132 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:30.027956   19132 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:30.027963   19132 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:30.027968   19132 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:30.027980   19132 cri.go:89] found id: ""
	I1018 11:31:30.028030   19132 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:30.041491   19132 out.go:203] 
	W1018 11:31:30.042749   19132 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:30.042790   19132 out.go:285] * 
	W1018 11:31:30.045793   19132 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:30.047078   19132 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-162665 --alsologtostderr -v=1": exit status 11
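Note: the exit status 11 above reaches the test through the harness's (dbg) Run wrapper, which shells out to the minikube binary and fails the assertion on any non-zero exit. A minimal sketch of that pattern, reusing the exact command line from the log (the wrapper is illustrative, not helpers_test.go's real implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the (dbg) Run step above; the binary path assumes
		// this job's repository layout.
		cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "headlamp",
			"-p", "addons-162665", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok {
			// The Headlamp test observed exit status 11 here.
			fmt.Printf("non-zero exit %d:\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("addon enabled:\n%s", out)
	}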
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-162665
helpers_test.go:243: (dbg) docker inspect addons-162665:

-- stdout --
	[
	    {
	        "Id": "7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f",
	        "Created": "2025-10-18T11:29:33.405172816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11346,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T11:29:33.455561245Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/hostname",
	        "HostsPath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/hosts",
	        "LogPath": "/var/lib/docker/containers/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f/7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f-json.log",
	        "Name": "/addons-162665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-162665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-162665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7255d06b4d1908780462c2b650239ed72b8b59a2e1189040336e3fa2fac9f38f",
	                "LowerDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/730abfb8ce2a77240121e1cec64652d711005133a584af9c21d9663ddd02a2cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-162665",
	                "Source": "/var/lib/docker/volumes/addons-162665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-162665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-162665",
	                "name.minikube.sigs.k8s.io": "addons-162665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aac1b2db2b31b8e260b2c1c78bffc1a3353fd7e78c0c611ff8d59c7ad8bd9c15",
	            "SandboxKey": "/var/run/docker/netns/aac1b2db2b31",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-162665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:43:ed:8d:ee:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "601a2ca07e5ff6602239981e74e84e169b74a70321fbdeed94c00633a93b6311",
	                    "EndpointID": "bfc92469d12465a28a8a2951ec0a54cb92c2a831e9cb335da869c131e445089d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-162665",
	                        "7255d06b4d19"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
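Note: the SSH connection used by the failing paused check is derived from this inspect output. The template query shown in the log, docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665, resolves the host port bound to the container's 22/tcp (32768 here). A small sketch that recovers the same value from the raw JSON (sshHostPort is an illustrative helper name):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the port bindings from `docker inspect` output.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	// sshHostPort returns the host port mapped to the container's 22/tcp, the
	// same value the Go template query in the log resolves.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no 22/tcp binding for %s", container)
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		fmt.Println(sshHostPort("addons-162665"))
	}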
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-162665 -n addons-162665
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-162665 logs -n 25: (1.131934665s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-584755 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-584755 │ jenkins │ v1.37.0 │ 18 Oct 25 11:28 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ delete │ -p download-only-584755 │ download-only-584755 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ start │ -o=json --download-only -p download-only-147645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-147645 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ delete │ -p download-only-147645 │ download-only-147645 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ delete │ -p download-only-584755 │ download-only-584755 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ delete │ -p download-only-147645 │ download-only-147645 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ start │ --download-only -p download-docker-063309 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-063309 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ │
	│ delete │ -p download-docker-063309 │ download-docker-063309 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ start │ --download-only -p binary-mirror-525445 --alsologtostderr --binary-mirror http://127.0.0.1:46875 --driver=docker  --container-runtime=crio │ binary-mirror-525445 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ │
	│ delete │ -p binary-mirror-525445 │ binary-mirror-525445 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ addons │ enable dashboard -p addons-162665 │ addons-162665 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ │
	│ addons │ disable dashboard -p addons-162665 │ addons-162665 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ │
	│ start │ -p addons-162665 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-162665 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:31 UTC │
	│ addons │ addons-162665 addons disable volcano --alsologtostderr -v=1 │ addons-162665 │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │ │
	│ addons │ addons-162665 addons disable gcp-auth --alsologtostderr -v=1 │ addons-162665 │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │ │
	│ addons │ enable headlamp -p addons-162665 --alsologtostderr -v=1 │ addons-162665 │ jenkins │ v1.37.0 │ 18 Oct 25 11:31 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:08.517995   10685 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:08.518227   10685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:08.518235   10685 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:08.518239   10685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:08.518432   10685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:29:08.518968   10685 out.go:368] Setting JSON to false
	I1018 11:29:08.519711   10685 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":697,"bootTime":1760786252,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:08.519806   10685 start.go:141] virtualization: kvm guest
	I1018 11:29:08.521741   10685 out.go:179] * [addons-162665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:29:08.522917   10685 notify.go:220] Checking for updates...
	I1018 11:29:08.522957   10685 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:29:08.524594   10685 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:08.526057   10685 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:29:08.527386   10685 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 11:29:08.528849   10685 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:29:08.530007   10685 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:29:08.531314   10685 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:29:08.553016   10685 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 11:29:08.553102   10685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:29:08.610185   10685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 11:29:08.599830107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:29:08.610287   10685 docker.go:318] overlay module found
	I1018 11:29:08.611978   10685 out.go:179] * Using the docker driver based on user configuration
	I1018 11:29:08.613157   10685 start.go:305] selected driver: docker
	I1018 11:29:08.613173   10685 start.go:925] validating driver "docker" against <nil>
	I1018 11:29:08.613191   10685 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:29:08.613708   10685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:29:08.672299   10685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-18 11:29:08.663075027 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:29:08.672494   10685 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:29:08.672695   10685 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:29:08.674459   10685 out.go:179] * Using Docker driver with root privileges
	I1018 11:29:08.675635   10685 cni.go:84] Creating CNI manager for ""
	I1018 11:29:08.675697   10685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 11:29:08.675707   10685 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 11:29:08.675792   10685 start.go:349] cluster config:
	{Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:29:08.677283   10685 out.go:179] * Starting "addons-162665" primary control-plane node in "addons-162665" cluster
	I1018 11:29:08.678603   10685 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 11:29:08.679856   10685 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 11:29:08.681031   10685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:08.681075   10685 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 11:29:08.681087   10685 cache.go:58] Caching tarball of preloaded images
	I1018 11:29:08.681139   10685 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 11:29:08.681182   10685 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 11:29:08.681194   10685 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 11:29:08.681549   10685 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/config.json ...
	I1018 11:29:08.681574   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/config.json: {Name:mke74a72cf962e4e13d5f241fc60a68ff68e6d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:08.697060   10685 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 11:29:08.697177   10685 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 11:29:08.697193   10685 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 11:29:08.697197   10685 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 11:29:08.697210   10685 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 11:29:08.697219   10685 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 11:29:21.168308   10685 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 11:29:21.168345   10685 cache.go:232] Successfully downloaded all kic artifacts
	I1018 11:29:21.168420   10685 start.go:360] acquireMachinesLock for addons-162665: {Name:mk4d42d0ef42e24680ba09e77813105e1317a459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 11:29:21.168537   10685 start.go:364] duration metric: took 87.239µs to acquireMachinesLock for "addons-162665"
	I1018 11:29:21.168568   10685 start.go:93] Provisioning new machine with config: &{Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 11:29:21.168643   10685 start.go:125] createHost starting for "" (driver="docker")
	I1018 11:29:21.170597   10685 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 11:29:21.170844   10685 start.go:159] libmachine.API.Create for "addons-162665" (driver="docker")
	I1018 11:29:21.170878   10685 client.go:168] LocalClient.Create starting
	I1018 11:29:21.171019   10685 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 11:29:21.927676   10685 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 11:29:22.136320   10685 cli_runner.go:164] Run: docker network inspect addons-162665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 11:29:22.152986   10685 cli_runner.go:211] docker network inspect addons-162665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 11:29:22.153079   10685 network_create.go:284] running [docker network inspect addons-162665] to gather additional debugging logs...
	I1018 11:29:22.153096   10685 cli_runner.go:164] Run: docker network inspect addons-162665
	W1018 11:29:22.169093   10685 cli_runner.go:211] docker network inspect addons-162665 returned with exit code 1
	I1018 11:29:22.169121   10685 network_create.go:287] error running [docker network inspect addons-162665]: docker network inspect addons-162665: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-162665 not found
	I1018 11:29:22.169137   10685 network_create.go:289] output of [docker network inspect addons-162665]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-162665 not found
	
	** /stderr **
	I1018 11:29:22.169249   10685 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 11:29:22.186125   10685 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce89a0}
	I1018 11:29:22.186159   10685 network_create.go:124] attempt to create docker network addons-162665 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 11:29:22.186198   10685 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-162665 addons-162665
	I1018 11:29:22.244143   10685 network_create.go:108] docker network addons-162665 192.168.49.0/24 created
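To confirm by hand that the bridge network above came up with the expected IPAM settings, a minimal check (profile name and subnet taken from this run):

    docker network inspect addons-162665 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
    # expected per the log above: 192.168.49.0/24 gw 192.168.49.1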
	I1018 11:29:22.244181   10685 kic.go:121] calculated static IP "192.168.49.2" for the "addons-162665" container
	I1018 11:29:22.244242   10685 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 11:29:22.260239   10685 cli_runner.go:164] Run: docker volume create addons-162665 --label name.minikube.sigs.k8s.io=addons-162665 --label created_by.minikube.sigs.k8s.io=true
	I1018 11:29:22.277274   10685 oci.go:103] Successfully created a docker volume addons-162665
	I1018 11:29:22.277357   10685 cli_runner.go:164] Run: docker run --rm --name addons-162665-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162665 --entrypoint /usr/bin/test -v addons-162665:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 11:29:28.949328   10685 cli_runner.go:217] Completed: docker run --rm --name addons-162665-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162665 --entrypoint /usr/bin/test -v addons-162665:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (6.671922759s)
	I1018 11:29:28.949355   10685 oci.go:107] Successfully prepared a docker volume addons-162665
	I1018 11:29:28.949368   10685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:28.949386   10685 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 11:29:28.949434   10685 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 11:29:33.334221   10685 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.384734791s)
	I1018 11:29:33.334253   10685 kic.go:203] duration metric: took 4.384864975s to extract preloaded images to volume ...
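The two docker run invocations above follow a common pattern: populate a named volume by untarring into it from a throwaway container. A minimal sketch, with the tarball path and kicbase image digest from this run abbreviated into placeholder variables:

    # $PRELOAD_TAR and $KICBASE stand in for the full host path and image digest logged above
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TAR":/preloaded.tar:ro -v addons-162665:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir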
	W1018 11:29:33.334334   10685 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 11:29:33.334367   10685 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 11:29:33.334401   10685 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 11:29:33.389351   10685 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-162665 --name addons-162665 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162665 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-162665 --network addons-162665 --ip 192.168.49.2 --volume addons-162665:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 11:29:33.712260   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Running}}
	I1018 11:29:33.731670   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:33.750742   10685 cli_runner.go:164] Run: docker exec addons-162665 stat /var/lib/dpkg/alternatives/iptables
	I1018 11:29:33.801386   10685 oci.go:144] the created container "addons-162665" has a running status.
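A quick way to confirm the node container state and its host port mappings (the published ports come from the docker run flags above; 32768 is the SSH mapping that appears later in this run):

    docker container inspect addons-162665 --format '{{.State.Status}}'   # running
    docker port addons-162665 22/tcp                                      # 127.0.0.1:32768 in this run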
	I1018 11:29:33.801414   10685 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa...
	I1018 11:29:33.962487   10685 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 11:29:33.990901   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:34.009083   10685 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 11:29:34.009103   10685 kic_runner.go:114] Args: [docker exec --privileged addons-162665 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 11:29:34.061624   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:34.079458   10685 machine.go:93] provisionDockerMachine start ...
	I1018 11:29:34.079543   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:34.098903   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:34.099130   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:34.099145   10685 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 11:29:34.233667   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162665
	
	I1018 11:29:34.233693   10685 ubuntu.go:182] provisioning hostname "addons-162665"
	I1018 11:29:34.233740   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:34.252465   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:34.252711   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:34.252734   10685 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-162665 && echo "addons-162665" | sudo tee /etc/hostname
	I1018 11:29:34.394659   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162665
	
	I1018 11:29:34.394738   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:34.413360   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:34.413597   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:34.413625   10685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-162665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-162665/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-162665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 11:29:34.545426   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 11:29:34.545457   10685 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 11:29:34.545507   10685 ubuntu.go:190] setting up certificates
	I1018 11:29:34.545520   10685 provision.go:84] configureAuth start
	I1018 11:29:34.545578   10685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162665
	I1018 11:29:34.562843   10685 provision.go:143] copyHostCerts
	I1018 11:29:34.562909   10685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 11:29:34.563027   10685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 11:29:34.563110   10685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 11:29:34.563168   10685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.addons-162665 san=[127.0.0.1 192.168.49.2 addons-162665 localhost minikube]
	I1018 11:29:35.074978   10685 provision.go:177] copyRemoteCerts
	I1018 11:29:35.075034   10685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 11:29:35.075068   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.092177   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
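The same key and mapped port allow a manual SSH session into the node if debugging is needed (values taken verbatim from the ssh client line above):

    ssh -o StrictHostKeyChecking=no -p 32768 \
      -i /home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa \
      docker@127.0.0.1 hostname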
	I1018 11:29:35.187939   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 11:29:35.206548   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 11:29:35.223997   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 11:29:35.240789   10685 provision.go:87] duration metric: took 695.256127ms to configureAuth
	I1018 11:29:35.240812   10685 ubuntu.go:206] setting minikube options for container-runtime
	I1018 11:29:35.240989   10685 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:29:35.241123   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.258234   10685 main.go:141] libmachine: Using SSH client type: native
	I1018 11:29:35.258474   10685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 11:29:35.258493   10685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 11:29:35.495425   10685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 11:29:35.495446   10685 machine.go:96] duration metric: took 1.415968808s to provisionDockerMachine
	I1018 11:29:35.495455   10685 client.go:171] duration metric: took 14.324567518s to LocalClient.Create
	I1018 11:29:35.495491   10685 start.go:167] duration metric: took 14.324640696s to libmachine.API.Create "addons-162665"
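A minimal check that the CRI-O sysconfig drop-in written above landed and the restarted daemon is healthy (container name from this run):

    docker exec addons-162665 cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    docker exec addons-162665 systemctl is-active crio           # active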
	I1018 11:29:35.495501   10685 start.go:293] postStartSetup for "addons-162665" (driver="docker")
	I1018 11:29:35.495511   10685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 11:29:35.495559   10685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 11:29:35.495588   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.513721   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.610862   10685 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 11:29:35.614281   10685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 11:29:35.614315   10685 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 11:29:35.614329   10685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 11:29:35.614384   10685 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 11:29:35.614408   10685 start.go:296] duration metric: took 118.902307ms for postStartSetup
	I1018 11:29:35.614661   10685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162665
	I1018 11:29:35.631926   10685 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/config.json ...
	I1018 11:29:35.632186   10685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:29:35.632243   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.650023   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.742102   10685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 11:29:35.746581   10685 start.go:128] duration metric: took 14.577923254s to createHost
	I1018 11:29:35.746608   10685 start.go:83] releasing machines lock for "addons-162665", held for 14.578054374s
	I1018 11:29:35.746671   10685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162665
	I1018 11:29:35.764232   10685 ssh_runner.go:195] Run: cat /version.json
	I1018 11:29:35.764276   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.764331   10685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 11:29:35.764387   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:35.783024   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.783262   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:35.928218   10685 ssh_runner.go:195] Run: systemctl --version
	I1018 11:29:35.934505   10685 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 11:29:35.968809   10685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 11:29:35.973630   10685 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 11:29:35.973688   10685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 11:29:36.000070   10685 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 11:29:36.000092   10685 start.go:495] detecting cgroup driver to use...
	I1018 11:29:36.000132   10685 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 11:29:36.000181   10685 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 11:29:36.015711   10685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 11:29:36.027721   10685 docker.go:218] disabling cri-docker service (if available) ...
	I1018 11:29:36.027787   10685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 11:29:36.044264   10685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 11:29:36.061680   10685 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 11:29:36.138588   10685 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 11:29:36.221342   10685 docker.go:234] disabling docker service ...
	I1018 11:29:36.221395   10685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 11:29:36.239646   10685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 11:29:36.252480   10685 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 11:29:36.334445   10685 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 11:29:36.410171   10685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 11:29:36.422565   10685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 11:29:36.436330   10685 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 11:29:36.436390   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.446211   10685 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 11:29:36.446267   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.454852   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.463674   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.472372   10685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 11:29:36.480603   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.488955   10685 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 11:29:36.502656   10685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
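After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should carry the settings minikube just applied; a sketch of how to confirm (the expected values are inferred from the edits, not dumped by the log):

    docker exec addons-162665 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",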
	I1018 11:29:36.511224   10685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 11:29:36.518258   10685 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 11:29:36.518338   10685 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 11:29:36.529862   10685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
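The fallback above (load the module, then write ip_forward directly) is what you would run by hand on a node whose kernel has not yet exposed the bridge netfilter sysctl:

    sudo modprobe br_netfilter && sysctl net.bridge.bridge-nf-call-iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward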
	I1018 11:29:36.537169   10685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:29:36.610401   10685 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 11:29:36.711906   10685 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 11:29:36.711969   10685 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 11:29:36.715836   10685 start.go:563] Will wait 60s for crictl version
	I1018 11:29:36.715904   10685 ssh_runner.go:195] Run: which crictl
	I1018 11:29:36.719436   10685 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 11:29:36.742964   10685 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 11:29:36.743121   10685 ssh_runner.go:195] Run: crio --version
	I1018 11:29:36.770082   10685 ssh_runner.go:195] Run: crio --version
	I1018 11:29:36.798787   10685 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 11:29:36.800289   10685 cli_runner.go:164] Run: docker network inspect addons-162665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 11:29:36.816909   10685 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 11:29:36.820931   10685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
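The grep-then-rewrite above keeps /etc/hosts idempotent; the net effect inside the node is one extra entry, which can be checked with:

    docker exec addons-162665 grep host.minikube.internal /etc/hosts
    # 192.168.49.1	host.minikube.internal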
	I1018 11:29:36.831122   10685 kubeadm.go:883] updating cluster {Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 11:29:36.831301   10685 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 11:29:36.831372   10685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 11:29:36.862675   10685 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 11:29:36.862696   10685 crio.go:433] Images already preloaded, skipping extraction
	I1018 11:29:36.862737   10685 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 11:29:36.887399   10685 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 11:29:36.887420   10685 cache_images.go:85] Images are preloaded, skipping loading
	I1018 11:29:36.887429   10685 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 11:29:36.887529   10685 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-162665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 11:29:36.887601   10685 ssh_runner.go:195] Run: crio config
	I1018 11:29:36.932490   10685 cni.go:84] Creating CNI manager for ""
	I1018 11:29:36.932512   10685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 11:29:36.932552   10685 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 11:29:36.932579   10685 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-162665 NodeName:addons-162665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 11:29:36.932704   10685 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-162665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 11:29:36.932781   10685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 11:29:36.940885   10685 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 11:29:36.940943   10685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 11:29:36.948584   10685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 11:29:36.961142   10685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 11:29:36.976455   10685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
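Before the real init below, the rendered config can be sanity-checked without persisting cluster state; a sketch using kubeadm's dry-run mode against the file just written (run inside the node; it may need the same --ignore-preflight-errors list minikube passes later):

    docker exec addons-162665 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run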
	I1018 11:29:36.989250   10685 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 11:29:36.993010   10685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 11:29:37.002984   10685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:29:37.083407   10685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 11:29:37.105193   10685 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665 for IP: 192.168.49.2
	I1018 11:29:37.105212   10685 certs.go:195] generating shared ca certs ...
	I1018 11:29:37.105226   10685 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.105385   10685 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 11:29:37.192357   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt ...
	I1018 11:29:37.192385   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt: {Name:mka3ecec2b2aab84aa27b1b0354e5b9efdba318a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.192558   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key ...
	I1018 11:29:37.192569   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key: {Name:mk95ba60734f15d990e406b8e853279868b97f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.192641   10685 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 11:29:37.231745   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt ...
	I1018 11:29:37.231781   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt: {Name:mkbdfa0d25f46dfa7ffa6b423e0f0cb725223088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.231942   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key ...
	I1018 11:29:37.231953   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key: {Name:mkfa8fca55a7201b9fd1abd7bc17b53c0ae00382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.232021   10685 certs.go:257] generating profile certs ...
	I1018 11:29:37.232069   10685 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.key
	I1018 11:29:37.232083   10685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt with IP's: []
	I1018 11:29:37.419385   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt ...
	I1018 11:29:37.419417   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: {Name:mkd8e6e07178e32a6c6afda800f9666e4077ecdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.419574   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.key ...
	I1018 11:29:37.419583   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.key: {Name:mkfe4a76a9bdecea041f4abb5ca5f33db085bcdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.419654   10685 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf
	I1018 11:29:37.419672   10685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 11:29:37.591106   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf ...
	I1018 11:29:37.591136   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf: {Name:mkf7ae67e94012cf306ecc751f58fae89e6c3c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.591325   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf ...
	I1018 11:29:37.591339   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf: {Name:mke791e6e1826669245c34107ea153fbe8e2b298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.592351   10685 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt.bb988cbf -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt
	I1018 11:29:37.592477   10685 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key.bb988cbf -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key
	I1018 11:29:37.592535   10685 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key
	I1018 11:29:37.592554   10685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt with IP's: []
	I1018 11:29:37.694911   10685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt ...
	I1018 11:29:37.694948   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt: {Name:mk0e39fff8885e87a032b546fca4640d3503eea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:37.695162   10685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key ...
	I1018 11:29:37.695176   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key: {Name:mk998fa9a1a5812677e19484993b3cd5927a59a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
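One way to verify the SAN list on the apiserver certificate just generated (the expected IPs come from the Generating line above; the cert also carries DNS names not shown in the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 among the entries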
	I1018 11:29:37.695361   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 11:29:37.695407   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 11:29:37.695438   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 11:29:37.695459   10685 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 11:29:37.696009   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 11:29:37.714735   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 11:29:37.732261   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 11:29:37.750589   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 11:29:37.768577   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 11:29:37.786606   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 11:29:37.803713   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 11:29:37.821424   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 11:29:37.839529   10685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 11:29:37.858533   10685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 11:29:37.870599   10685 ssh_runner.go:195] Run: openssl version
	I1018 11:29:37.876524   10685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 11:29:37.887240   10685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:29:37.890744   10685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:29:37.890816   10685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 11:29:37.924617   10685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
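The b5213941.0 name is OpenSSL's subject-hash convention: the symlink filename is the CA certificate's hash plus a .0 suffix, so it can be derived and created by hand:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"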
	I1018 11:29:37.933274   10685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 11:29:37.936877   10685 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 11:29:37.936918   10685 kubeadm.go:400] StartCluster: {Name:addons-162665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-162665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:29:37.936979   10685 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:29:37.937050   10685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:29:37.962711   10685 cri.go:89] found id: ""
	I1018 11:29:37.962779   10685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 11:29:37.970775   10685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 11:29:37.978377   10685 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 11:29:37.978424   10685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 11:29:37.986012   10685 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 11:29:37.986026   10685 kubeadm.go:157] found existing configuration files:
	
	I1018 11:29:37.986062   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 11:29:37.993442   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 11:29:37.993501   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 11:29:38.000580   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 11:29:38.007873   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 11:29:38.007943   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 11:29:38.014962   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 11:29:38.022074   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 11:29:38.022120   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 11:29:38.029103   10685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 11:29:38.036185   10685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 11:29:38.036237   10685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 11:29:38.043098   10685 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 11:29:38.077372   10685 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 11:29:38.077447   10685 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 11:29:38.097088   10685 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 11:29:38.097151   10685 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 11:29:38.097230   10685 kubeadm.go:318] OS: Linux
	I1018 11:29:38.097337   10685 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 11:29:38.097401   10685 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 11:29:38.097478   10685 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 11:29:38.097543   10685 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 11:29:38.097637   10685 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 11:29:38.097720   10685 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 11:29:38.097801   10685 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 11:29:38.097853   10685 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 11:29:38.152363   10685 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 11:29:38.152533   10685 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 11:29:38.152658   10685 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 11:29:38.159185   10685 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 11:29:38.162876   10685 out.go:252]   - Generating certificates and keys ...
	I1018 11:29:38.162965   10685 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 11:29:38.163035   10685 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 11:29:38.420295   10685 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 11:29:38.743237   10685 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 11:29:38.987498   10685 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 11:29:39.582453   10685 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 11:29:39.873093   10685 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 11:29:39.873205   10685 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-162665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 11:29:40.131749   10685 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 11:29:40.131928   10685 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-162665 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 11:29:40.189175   10685 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 11:29:40.935217   10685 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 11:29:41.131452   10685 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 11:29:41.131549   10685 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 11:29:41.544386   10685 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 11:29:41.717583   10685 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 11:29:41.953364   10685 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 11:29:42.106975   10685 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 11:29:42.618296   10685 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 11:29:42.618839   10685 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 11:29:42.623723   10685 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 11:29:42.625608   10685 out.go:252]   - Booting up control plane ...
	I1018 11:29:42.625711   10685 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 11:29:42.625787   10685 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 11:29:42.625841   10685 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 11:29:42.638348   10685 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 11:29:42.638463   10685 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 11:29:42.644577   10685 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 11:29:42.644799   10685 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 11:29:42.644869   10685 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 11:29:42.739861   10685 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 11:29:42.740020   10685 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 11:29:43.741491   10685 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001860835s
	I1018 11:29:43.744811   10685 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 11:29:43.744925   10685 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 11:29:43.745085   10685 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 11:29:43.745198   10685 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 11:29:44.918780   10685 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.173820007s
	I1018 11:29:45.714886   10685 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.969826217s
	I1018 11:29:47.246944   10685 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501985337s
	I1018 11:29:47.257236   10685 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 11:29:47.268493   10685 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 11:29:47.277317   10685 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 11:29:47.277665   10685 kubeadm.go:318] [mark-control-plane] Marking the node addons-162665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 11:29:47.285528   10685 kubeadm.go:318] [bootstrap-token] Using token: cvvifb.r3a9yrawhzc3ilo4
	I1018 11:29:47.286919   10685 out.go:252]   - Configuring RBAC rules ...
	I1018 11:29:47.287082   10685 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 11:29:47.290590   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 11:29:47.295358   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 11:29:47.297475   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 11:29:47.299623   10685 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 11:29:47.301859   10685 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 11:29:47.653107   10685 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 11:29:48.068071   10685 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 11:29:48.653041   10685 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 11:29:48.654028   10685 kubeadm.go:318] 
	I1018 11:29:48.654115   10685 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 11:29:48.654129   10685 kubeadm.go:318] 
	I1018 11:29:48.654194   10685 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 11:29:48.654200   10685 kubeadm.go:318] 
	I1018 11:29:48.654220   10685 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 11:29:48.654282   10685 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 11:29:48.654333   10685 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 11:29:48.654342   10685 kubeadm.go:318] 
	I1018 11:29:48.654399   10685 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 11:29:48.654406   10685 kubeadm.go:318] 
	I1018 11:29:48.654443   10685 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 11:29:48.654449   10685 kubeadm.go:318] 
	I1018 11:29:48.654516   10685 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 11:29:48.654613   10685 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 11:29:48.654726   10685 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 11:29:48.654736   10685 kubeadm.go:318] 
	I1018 11:29:48.654895   10685 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 11:29:48.654994   10685 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 11:29:48.655005   10685 kubeadm.go:318] 
	I1018 11:29:48.655113   10685 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token cvvifb.r3a9yrawhzc3ilo4 \
	I1018 11:29:48.655247   10685 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 11:29:48.655290   10685 kubeadm.go:318] 	--control-plane 
	I1018 11:29:48.655298   10685 kubeadm.go:318] 
	I1018 11:29:48.655398   10685 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 11:29:48.655408   10685 kubeadm.go:318] 
	I1018 11:29:48.655522   10685 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token cvvifb.r3a9yrawhzc3ilo4 \
	I1018 11:29:48.655707   10685 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
	I1018 11:29:48.657603   10685 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 11:29:48.657738   10685 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
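The sha256:... value in the join commands above is kubeadm's discovery token CA cert hash: the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch that recomputes it for comparison (illustrative only, not minikube code; the ca.crt path assumes the default kubeadm PKI layout):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the default kubeadm PKI layout.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Its output can be checked against the --discovery-token-ca-cert-hash printed in the log above.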
	I1018 11:29:48.657782   10685 cni.go:84] Creating CNI manager for ""
	I1018 11:29:48.657803   10685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 11:29:48.660549   10685 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 11:29:48.661863   10685 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 11:29:48.666044   10685 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 11:29:48.666061   10685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 11:29:48.678922   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 11:29:48.880742   10685 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 11:29:48.880852   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:48.880903   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-162665 minikube.k8s.io/updated_at=2025_10_18T11_29_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-162665 minikube.k8s.io/primary=true
	I1018 11:29:48.962022   10685 ops.go:34] apiserver oom_adj: -16
	I1018 11:29:48.962162   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:49.462404   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:49.962252   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:50.463063   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:50.962776   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:51.462840   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:51.962602   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:52.462454   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:52.962453   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:53.463032   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:53.962613   10685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 11:29:54.022742   10685 kubeadm.go:1113] duration metric: took 5.141937663s to wait for elevateKubeSystemPrivileges
	I1018 11:29:54.022788   10685 kubeadm.go:402] duration metric: took 16.085872206s to StartCluster
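The burst of "kubectl get sa default" runs above is a readiness poll: the command is retried roughly every 500ms until the default ServiceAccount exists, which is what gates the elevated kube-system RBAC setup. A hedged sketch of that pattern, with the binary and kubeconfig paths copied from the log (the loop structure and timeout are assumptions, not the actual elevateKubeSystemPrivileges implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Timeout is an assumption for illustration.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		// A zero exit status means the default ServiceAccount exists.
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}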
	I1018 11:29:54.022809   10685 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:54.022921   10685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:29:54.023436   10685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 11:29:54.023653   10685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 11:29:54.023662   10685 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 11:29:54.023725   10685 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 11:29:54.023869   10685 addons.go:69] Setting yakd=true in profile "addons-162665"
	I1018 11:29:54.023876   10685 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:29:54.023887   10685 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-162665"
	I1018 11:29:54.023900   10685 addons.go:69] Setting metrics-server=true in profile "addons-162665"
	I1018 11:29:54.023878   10685 addons.go:69] Setting inspektor-gadget=true in profile "addons-162665"
	I1018 11:29:54.023916   10685 addons.go:69] Setting storage-provisioner=true in profile "addons-162665"
	I1018 11:29:54.023921   10685 addons.go:69] Setting ingress-dns=true in profile "addons-162665"
	I1018 11:29:54.023927   10685 addons.go:238] Setting addon inspektor-gadget=true in "addons-162665"
	I1018 11:29:54.023936   10685 addons.go:69] Setting default-storageclass=true in profile "addons-162665"
	I1018 11:29:54.023953   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023958   10685 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-162665"
	I1018 11:29:54.023960   10685 addons.go:238] Setting addon ingress-dns=true in "addons-162665"
	I1018 11:29:54.023972   10685 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-162665"
	I1018 11:29:54.023954   10685 addons.go:69] Setting ingress=true in profile "addons-162665"
	I1018 11:29:54.023983   10685 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-162665"
	I1018 11:29:54.024001   10685 addons.go:238] Setting addon ingress=true in "addons-162665"
	I1018 11:29:54.024016   10685 addons.go:69] Setting registry=true in profile "addons-162665"
	I1018 11:29:54.024051   10685 addons.go:69] Setting volumesnapshots=true in profile "addons-162665"
	I1018 11:29:54.024055   10685 addons.go:238] Setting addon registry=true in "addons-162665"
	I1018 11:29:54.024062   10685 addons.go:238] Setting addon volumesnapshots=true in "addons-162665"
	I1018 11:29:54.024070   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023949   10685 addons.go:238] Setting addon metrics-server=true in "addons-162665"
	I1018 11:29:54.024081   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024090   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024100   10685 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-162665"
	I1018 11:29:54.024130   10685 addons.go:69] Setting cloud-spanner=true in profile "addons-162665"
	I1018 11:29:54.024148   10685 addons.go:238] Setting addon cloud-spanner=true in "addons-162665"
	I1018 11:29:54.024151   10685 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-162665"
	I1018 11:29:54.024164   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024182   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023887   10685 addons.go:69] Setting registry-creds=true in profile "addons-162665"
	I1018 11:29:54.024239   10685 addons.go:238] Setting addon registry-creds=true in "addons-162665"
	I1018 11:29:54.024259   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024363   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024509   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024554   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024562   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024574   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024598   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024636   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024844   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.023906   10685 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-162665"
	I1018 11:29:54.025008   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.023914   10685 addons.go:69] Setting gcp-auth=true in profile "addons-162665"
	I1018 11:29:54.025198   10685 mustload.go:65] Loading cluster: addons-162665
	I1018 11:29:54.023906   10685 addons.go:238] Setting addon yakd=true in "addons-162665"
	I1018 11:29:54.025380   10685 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:29:54.025384   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.025572   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.025622   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.024033   10685 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-162665"
	I1018 11:29:54.025911   10685 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-162665"
	I1018 11:29:54.024043   10685 addons.go:69] Setting volcano=true in profile "addons-162665"
	I1018 11:29:54.025951   10685 addons.go:238] Setting addon volcano=true in "addons-162665"
	I1018 11:29:54.025978   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.026111   10685 out.go:179] * Verifying Kubernetes components...
	I1018 11:29:54.024071   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024001   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024024   10685 addons.go:238] Setting addon storage-provisioner=true in "addons-162665"
	I1018 11:29:54.026708   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.024001   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.027722   10685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 11:29:54.032391   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.032453   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.035737   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.035748   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.036125   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.036820   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.038043   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
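Each cli_runner line above shells out to "docker container inspect" with a Go-template format string to read a single field of the container's state. A minimal sketch of the same call, assuming the docker CLI is on PATH (illustrative, not the cli_runner implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus returns the value of .State.Status for the named
// container, e.g. "running", exactly as the log lines above query it.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-162665")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(status)
}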
	I1018 11:29:54.081482   10685 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 11:29:54.093872   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.094713   10685 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 11:29:54.094745   10685 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 11:29:54.095195   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.099169   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 11:29:54.100666   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 11:29:54.102497   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 11:29:54.102552   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 11:29:54.103989   10685 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 11:29:54.104048   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.105272   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:29:54.106485   10685 addons.go:238] Setting addon default-storageclass=true in "addons-162665"
	I1018 11:29:54.106538   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.107141   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.109079   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 11:29:54.113494   10685 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 11:29:54.115627   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 11:29:54.115688   10685 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 11:29:54.117136   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:29:54.119268   10685 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 11:29:54.119289   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 11:29:54.119348   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.119934   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 11:29:54.120039   10685 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 11:29:54.120268   10685 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 11:29:54.120289   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 11:29:54.120351   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.122011   10685 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 11:29:54.122087   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 11:29:54.122136   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.123264   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 11:29:54.126904   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 11:29:54.128333   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 11:29:54.130246   10685 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 11:29:54.131994   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 11:29:54.132042   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 11:29:54.132130   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.138316   10685 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 11:29:54.138618   10685 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 11:29:54.140803   10685 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 11:29:54.141154   10685 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 11:29:54.141170   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 11:29:54.141224   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.141577   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 11:29:54.141592   10685 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 11:29:54.141639   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.142554   10685 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 11:29:54.142570   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 11:29:54.142613   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	W1018 11:29:54.146840   10685 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 11:29:54.147943   10685 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-162665"
	I1018 11:29:54.147987   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:29:54.148493   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:29:54.151299   10685 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 11:29:54.151368   10685 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 11:29:54.152611   10685 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 11:29:54.152636   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 11:29:54.152686   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.153055   10685 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 11:29:54.153066   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 11:29:54.153115   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.153798   10685 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 11:29:54.155447   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 11:29:54.155472   10685 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 11:29:54.155524   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.158224   10685 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 11:29:54.166352   10685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
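Unescaped for readability, the sed pipeline above splices two fragments into the CoreDNS Corefile before replacing the ConfigMap: a "log" directive ahead of the "errors" line, and this hosts block ahead of the "forward . /etc/resolv.conf" line:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

This is the rewrite that the later "host record injected into CoreDNS's ConfigMap" line confirms.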
	I1018 11:29:54.166637   10685 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 11:29:54.166654   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 11:29:54.166707   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.182939   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.190029   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.190863   10685 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 11:29:54.190902   10685 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 11:29:54.190952   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.195285   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.195302   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.199413   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.208561   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.211055   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.212440   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.222695   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.227494   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.230471   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.233911   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.236439   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.244418   10685 out.go:179]   - Using image docker.io/busybox:stable
	I1018 11:29:54.249482   10685 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 11:29:54.250986   10685 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 11:29:54.251010   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 11:29:54.251067   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:29:54.255049   10685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 11:29:54.261892   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.289231   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:29:54.358544   10685 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:54.358564   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 11:29:54.368569   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 11:29:54.377323   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:54.384544   10685 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 11:29:54.384566   10685 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 11:29:54.401674   10685 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 11:29:54.401706   10685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 11:29:54.407502   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 11:29:54.412861   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 11:29:54.415158   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 11:29:54.416517   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 11:29:54.423058   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 11:29:54.429842   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 11:29:54.429869   10685 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 11:29:54.431958   10685 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 11:29:54.431984   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 11:29:54.433070   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 11:29:54.433091   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 11:29:54.441833   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 11:29:54.444520   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 11:29:54.447371   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 11:29:54.447393   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 11:29:54.455397   10685 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 11:29:54.455425   10685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 11:29:54.455600   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 11:29:54.464528   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 11:29:54.464608   10685 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 11:29:54.474654   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 11:29:54.477976   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 11:29:54.478000   10685 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 11:29:54.479732   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 11:29:54.479776   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 11:29:54.505295   10685 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 11:29:54.505333   10685 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 11:29:54.520556   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 11:29:54.520585   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 11:29:54.533287   10685 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 11:29:54.533317   10685 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 11:29:54.547303   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 11:29:54.547336   10685 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 11:29:54.548478   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 11:29:54.548503   10685 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 11:29:54.555623   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 11:29:54.555647   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 11:29:54.583743   10685 node_ready.go:35] waiting up to 6m0s for node "addons-162665" to be "Ready" ...
	I1018 11:29:54.583885   10685 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 11:29:54.593623   10685 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 11:29:54.593648   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 11:29:54.594386   10685 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:29:54.594407   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 11:29:54.610478   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 11:29:54.645098   10685 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 11:29:54.645129   10685 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 11:29:54.665806   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:29:54.681705   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 11:29:54.734633   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 11:29:54.734660   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 11:29:54.800354   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 11:29:54.800386   10685 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 11:29:54.870857   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 11:29:54.870879   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 11:29:54.944961   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 11:29:54.945004   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 11:29:55.000006   10685 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 11:29:55.000039   10685 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 11:29:55.039519   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 11:29:55.091796   10685 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-162665" context rescaled to 1 replicas
	W1018 11:29:55.318663   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:55.318714   10685 retry.go:31] will retry after 164.778691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:55.483869   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:55.621916   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.205360759s)
	I1018 11:29:55.621953   10685 addons.go:479] Verifying addon ingress=true in "addons-162665"
	I1018 11:29:55.621970   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.180116155s)
	I1018 11:29:55.622153   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177604939s)
	I1018 11:29:55.622216   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.166593387s)
	I1018 11:29:55.621911   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.198816793s)
	I1018 11:29:55.622276   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.147592807s)
	I1018 11:29:55.622291   10685 addons.go:479] Verifying addon registry=true in "addons-162665"
	I1018 11:29:55.622338   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.011826066s)
	I1018 11:29:55.622353   10685 addons.go:479] Verifying addon metrics-server=true in "addons-162665"
	I1018 11:29:55.623419   10685 out.go:179] * Verifying registry addon...
	I1018 11:29:55.623439   10685 out.go:179] * Verifying ingress addon...
	I1018 11:29:55.626594   10685 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 11:29:55.626597   10685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1018 11:29:55.629971   10685 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1018 11:29:55.630133   10685 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 11:29:55.630148   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:55.630749   10685 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 11:29:55.630783   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:56.070581   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.404725186s)
	W1018 11:29:56.070639   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 11:29:56.070646   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.388820125s)
	I1018 11:29:56.070662   10685 retry.go:31] will retry after 254.577179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 11:29:56.070939   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.031376956s)
	I1018 11:29:56.070964   10685 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-162665"
	I1018 11:29:56.072508   10685 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 11:29:56.072508   10685 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-162665 service yakd-dashboard -n yakd-dashboard
	
	I1018 11:29:56.074694   10685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 11:29:56.077777   10685 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 11:29:56.077794   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:56.179422   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:56.179610   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 11:29:56.199174   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:56.199207   10685 retry.go:31] will retry after 435.672465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:56.325871   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 11:29:56.577655   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:29:56.586902   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:29:56.629337   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:56.629505   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:56.635479   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:57.077708   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:57.177788   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:57.177915   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:57.577944   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:57.629925   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:57.630114   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:58.078018   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:58.129512   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:58.129671   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:58.577916   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:29:58.587050   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:29:58.629393   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:58.629627   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:58.793715   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.467788899s)
	I1018 11:29:58.793739   10685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.158233194s)
	W1018 11:29:58.793777   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:58.793798   10685 retry.go:31] will retry after 507.850372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
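
The retry.go lines show the failed apply being rescheduled with growing, jittered delays (507ms, 552ms, then 1.02s, 1.25s, 2.5s, 4.2s, 8.2s and 21.4s later in this log). A generic sketch of that retry-with-backoff pattern (illustrative of the loop's shape; not minikube's actual retry implementation):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff reruns op with jittered, doubling delays until it
	// succeeds or the attempt budget is exhausted.
	func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Jitter explains the irregular intervals in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(5, 400*time.Millisecond, func() error {
			calls++
			if calls < 4 {
				return fmt.Errorf("exit status 1")
			}
			return nil
		})
	}

Because the underlying manifest never changes, every attempt in this run fails identically and the backoff only spaces out the same error.
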
	I1018 11:29:59.077406   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:59.178016   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:59.178189   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:29:59.302400   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:29:59.578560   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:29:59.629880   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:29:59.629957   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:29:59.827300   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:29:59.827331   10685 retry.go:31] will retry after 552.636093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:00.078562   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:00.179571   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:00.179804   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:00.380193   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:00.578201   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:00.629214   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:00.629448   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:00.909299   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:00.909330   10685 retry.go:31] will retry after 1.024281319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:01.078311   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:01.086247   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:01.178695   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:01.178785   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:01.578548   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:01.629374   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:01.629494   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:01.709358   10685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 11:30:01.709435   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:30:01.728175   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
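
The cli_runner/sshutil pair above resolves which host port Docker mapped to the node container's 22/tcp before opening an SSH session against 127.0.0.1. A sketch of that lookup (illustrative, not minikube source; uses the same inspect template shown in the log, which minikube additionally wraps in single quotes when logging):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort asks Docker which host port is published for the
	// container's 22/tcp endpoint.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("addons-162665")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Matches the sshutil line: IP 127.0.0.1, Port 32768 in this run.
		fmt.Printf("ssh -p %s docker@127.0.0.1\n", port)
	}
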
	I1018 11:30:01.830372   10685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 11:30:01.843867   10685 addons.go:238] Setting addon gcp-auth=true in "addons-162665"
	I1018 11:30:01.843915   10685 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:30:01.844336   10685 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:30:01.862838   10685 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 11:30:01.862893   10685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:30:01.879818   10685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:30:01.934052   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:02.078501   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:02.129906   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:02.130068   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:02.467999   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:02.468027   10685 retry.go:31] will retry after 1.246367926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:02.470364   10685 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 11:30:02.471728   10685 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 11:30:02.472876   10685 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 11:30:02.472894   10685 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 11:30:02.486493   10685 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 11:30:02.486514   10685 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 11:30:02.499907   10685 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 11:30:02.499931   10685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 11:30:02.512905   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 11:30:02.577631   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:02.629833   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:02.630082   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:02.815867   10685 addons.go:479] Verifying addon gcp-auth=true in "addons-162665"
	I1018 11:30:02.817243   10685 out.go:179] * Verifying gcp-auth addon...
	I1018 11:30:02.819528   10685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 11:30:02.821713   10685 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 11:30:02.821730   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
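
The kapi.go lines are a poll loop: list pods matching a label selector and report their phase until one leaves Pending. A minimal client-go sketch of the same idea (an assumption about the loop's shape, not minikube's code; assumes client-go >= 0.18 signatures and a kubeconfig at the default path):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods in ns matching selector until one is Running.
	func waitForLabel(ns, selector string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return nil
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		_ = waitForLabel("gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 2*time.Minute)
	}

The "Pending: [<nil>]" in the log is the pod phase plus its (empty) container state list, printed each poll tick until the node itself reports Ready and pods can be scheduled.
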
	I1018 11:30:03.077909   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:03.086974   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:03.178458   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:03.178597   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:03.322436   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:03.577324   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:03.632325   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:03.632547   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:03.714535   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:03.822858   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:04.079024   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:04.130228   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:04.130406   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:04.245658   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:04.245690   10685 retry.go:31] will retry after 2.529964576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:04.322019   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:04.577719   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:04.629212   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:04.629615   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:04.822886   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:05.077970   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:05.087113   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:05.129649   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:05.129712   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:05.322027   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:05.577941   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:05.629348   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:05.629499   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:05.822996   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:06.077645   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:06.130100   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:06.130138   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:06.322569   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:06.577605   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:06.630080   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:06.630121   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:06.776303   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:06.822096   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:07.078503   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:07.129967   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:07.130123   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:07.296343   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:07.296380   10685 retry.go:31] will retry after 4.158681311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:07.323060   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:07.577912   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:07.586141   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:07.630081   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:07.630328   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:07.822649   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:08.077379   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:08.130341   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:08.130387   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:08.323125   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:08.577915   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:08.629372   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:08.629611   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:08.822117   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:09.078122   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:09.129872   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:09.129944   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:09.322364   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:09.578241   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:09.586391   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:09.629850   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:09.629999   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:09.822613   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:10.077549   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:10.129190   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:10.129346   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:10.322997   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:10.577579   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:10.629650   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:10.629753   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:10.822246   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:11.078298   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:11.129994   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:11.130035   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:11.322981   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:11.456227   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:11.578256   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:11.586561   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:11.630737   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:11.631095   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:11.821756   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:11.986324   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:11.986354   10685 retry.go:31] will retry after 4.005862643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:12.077700   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:12.129592   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:12.129616   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:12.321991   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:12.577855   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:12.629446   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:12.629541   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:12.823022   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:13.078151   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:13.130153   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:13.130411   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:13.322685   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:13.577627   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:13.586818   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:13.629359   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:13.629489   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:13.821898   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:14.077249   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:14.129122   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:14.129213   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:14.322802   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:14.577360   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:14.629003   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:14.629238   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:14.822802   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:15.077694   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:15.129965   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:15.130009   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:15.322499   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:15.578072   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:15.587154   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:15.629517   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:15.629664   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:15.821837   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:15.992973   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:16.078179   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:16.129892   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:16.130074   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:16.322870   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:16.527260   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:16.527295   10685 retry.go:31] will retry after 8.183681212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:16.577988   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:16.629885   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:16.629968   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:16.822136   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:17.078184   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:17.129426   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:17.129596   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:17.322182   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:17.577724   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:17.629524   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:17.629739   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:17.821953   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:18.077740   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:18.086938   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:18.129610   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:18.129846   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:18.321937   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:18.578081   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:18.630092   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:18.630153   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:18.823073   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:19.078170   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:19.129926   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:19.130022   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:19.322444   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:19.578266   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:19.629677   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:19.629858   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:19.822188   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:20.077909   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:20.087077   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:20.129619   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:20.129733   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:20.322189   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:20.577976   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:20.629385   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:20.629569   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:20.821976   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:21.077838   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:21.129256   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:21.129478   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:21.322699   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:21.581091   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:21.630104   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:21.630207   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:21.822770   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:22.077348   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:22.129838   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:22.129891   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:22.322415   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:22.578056   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:22.586348   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:22.629633   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:22.629806   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:22.822549   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:23.078257   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:23.130134   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:23.130322   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:23.322600   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:23.577552   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:23.629570   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:23.629631   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:23.821974   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:24.077697   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:24.129751   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:24.129819   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:24.322398   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:24.578294   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:24.586430   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:24.629902   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:24.630000   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:24.712006   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:24.823227   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:25.077509   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:25.129821   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:25.130032   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 11:30:25.239871   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:25.239902   10685 retry.go:31] will retry after 21.38616268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:25.322395   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:25.578176   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:25.629673   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:25.629933   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:25.822526   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:26.077851   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:26.129620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:26.129919   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:26.321992   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:26.577678   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:26.586872   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:26.629265   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:26.629446   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:26.822996   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:27.077866   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:27.129638   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:27.129868   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:27.322561   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:27.577249   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:27.630107   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:27.630242   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:27.822741   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:28.077280   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:28.129975   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:28.130187   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:28.322620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:28.577625   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:28.587117   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:28.631544   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:28.631908   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:28.822289   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:29.077707   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:29.131295   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:29.131443   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:29.322949   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:29.577692   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:29.629656   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:29.629717   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:29.822206   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:30.077860   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:30.129804   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:30.129936   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:30.322537   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:30.577996   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:30.629674   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:30.629806   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:30.822385   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:31.078197   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:31.086372   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:31.129746   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:31.129964   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:31.322308   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:31.577947   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:31.629792   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:31.629897   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:31.822805   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:32.077438   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:32.130288   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:32.130447   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:32.323084   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:32.578053   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:32.629865   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:32.629906   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:32.822796   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:33.077426   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:30:33.088504   10685 node_ready.go:57] node "addons-162665" has "Ready":"False" status (will retry)
	I1018 11:30:33.129958   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:33.130156   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:33.322630   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:33.577453   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:33.629164   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:33.629322   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:33.822715   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:34.077534   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:34.129602   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:34.129682   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:34.322143   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:34.577993   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:34.629558   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:34.629851   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:34.822300   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:35.078058   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:35.129739   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:35.129871   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:35.324276   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:35.578227   10685 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 11:30:35.578247   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:35.586137   10685 node_ready.go:49] node "addons-162665" is "Ready"
	I1018 11:30:35.586165   10685 node_ready.go:38] duration metric: took 41.002375212s for node "addons-162665" to be "Ready" ...
	I1018 11:30:35.586180   10685 api_server.go:52] waiting for apiserver process to appear ...
	I1018 11:30:35.586233   10685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:30:35.601873   10685 api_server.go:72] duration metric: took 41.57817834s to wait for apiserver process to appear ...
	I1018 11:30:35.601909   10685 api_server.go:88] waiting for apiserver healthz status ...
	I1018 11:30:35.601930   10685 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 11:30:35.606743   10685 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 11:30:35.607741   10685 api_server.go:141] control plane version: v1.34.1
	I1018 11:30:35.607774   10685 api_server.go:131] duration metric: took 5.857346ms to wait for apiserver health ...
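The healthz probe above is a plain HTTPS GET that must answer 200 with the literal body "ok". A minimal Go sketch of that check, assuming client is an *http.Client already carrying the cluster CA and client certificates from the kubeconfig; minikube's real implementation (api_server.go) adds timeouts and retry handling on top of this:

    package main

    import (
    	"io"
    	"net/http"
    	"strings"
    )

    // healthzOK reports whether the apiserver healthz endpoint answers
    // HTTP 200 with the body "ok", matching the check logged above.
    func healthzOK(client *http.Client, url string) (bool, error) {
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK &&
    		strings.TrimSpace(string(body)) == "ok", nil
    }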
	I1018 11:30:35.607787   10685 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 11:30:35.679473   10685 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 11:30:35.679500   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:35.681032   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
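Each repeated "waiting for pod" line above is one poll of the pods matching a label selector. A rough client-go sketch of such a poll, assuming cs is an already-configured *kubernetes.Clientset; minikube's actual loop (kapi.go) also tracks per-pod state transitions like the Pending states shown here:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podsRunning reports whether at least one pod matches selector and
    // every matching pod has reached the Running phase; the caller keeps
    // polling on a timer until this returns true or a deadline expires.
    func podsRunning(ctx context.Context, cs *kubernetes.Clientset, namespace, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	if len(pods.Items) == 0 {
    		return false, nil // nothing scheduled yet, keep polling
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil
    		}
    	}
    	return true, nil
    }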
	I1018 11:30:35.682041   10685 system_pods.go:59] 20 kube-system pods found
	I1018 11:30:35.682076   10685 system_pods.go:61] "amd-gpu-device-plugin-qtz57" [7718c757-52e9-4c21-8387-b22e46dbd672] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:35.682086   10685 system_pods.go:61] "coredns-66bc5c9577-dd8db" [9e860bf0-8080-4685-be57-8e4372d70758] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:35.682100   10685 system_pods.go:61] "csi-hostpath-attacher-0" [808c9abd-09ef-4a82-a9b0-40e0b5583c62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:35.682108   10685 system_pods.go:61] "csi-hostpath-resizer-0" [5fc9ea30-c6c5-4b52-801e-6f6744fcb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:35.682117   10685 system_pods.go:61] "csi-hostpathplugin-vd8h9" [8084337b-ce37-4904-b2d8-f9d98bec885a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:35.682122   10685 system_pods.go:61] "etcd-addons-162665" [985d8d51-a9b4-4613-8496-616cbbc9ba77] Running
	I1018 11:30:35.682127   10685 system_pods.go:61] "kindnet-chh44" [c8dd40f2-5d47-4163-a0f5-b4a42c683205] Running
	I1018 11:30:35.682132   10685 system_pods.go:61] "kube-apiserver-addons-162665" [b0263b5e-10dd-451f-a711-eafcf586b058] Running
	I1018 11:30:35.682136   10685 system_pods.go:61] "kube-controller-manager-addons-162665" [602b205c-f553-44c4-b952-749da212d7fc] Running
	I1018 11:30:35.682144   10685 system_pods.go:61] "kube-ingress-dns-minikube" [448dbfd9-bfeb-46dd-b9d4-8223a2d0208b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:35.682151   10685 system_pods.go:61] "kube-proxy-952nl" [d7c98ee8-f772-4ace-9296-8ed60510d4c6] Running
	I1018 11:30:35.682156   10685 system_pods.go:61] "kube-scheduler-addons-162665" [ad5158d7-dd62-4cf1-b936-323a01c48bea] Running
	I1018 11:30:35.682164   10685 system_pods.go:61] "metrics-server-85b7d694d7-4fbgz" [7862dfcb-3720-49c5-a912-e836d1598eaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:35.682172   10685 system_pods.go:61] "nvidia-device-plugin-daemonset-l95vf" [4c8e1e2a-6ab0-4cde-8847-b7cdf5b01ab4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:35.682181   10685 system_pods.go:61] "registry-6b586f9694-8ns6k" [c800a208-4e00-4ea5-bacc-ab4677684b88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:35.682190   10685 system_pods.go:61] "registry-creds-764b6fb674-hx56w" [b711b8e2-3d97-490b-bb1b-e5272a73c7bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:35.682199   10685 system_pods.go:61] "registry-proxy-tsk7w" [34d517d6-de7d-42f2-88d2-ae400f0fce9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:35.682221   10685 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mhxbb" [e43d99f8-e9e2-4f3b-9b80-7b05e4c365db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.682231   10685 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q4cgf" [f5e34437-83ad-4871-83fc-22cf1c594cc6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.682238   10685 system_pods.go:61] "storage-provisioner" [757a0a21-65a5-42b5-8599-5bad27d50df7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 11:30:35.682247   10685 system_pods.go:74] duration metric: took 74.451132ms to wait for pod list to return data ...
	I1018 11:30:35.682258   10685 default_sa.go:34] waiting for default service account to be created ...
	I1018 11:30:35.687383   10685 default_sa.go:45] found service account: "default"
	I1018 11:30:35.687416   10685 default_sa.go:55] duration metric: took 5.15054ms for default service account to be created ...
	I1018 11:30:35.687428   10685 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 11:30:35.781236   10685 system_pods.go:86] 20 kube-system pods found
	I1018 11:30:35.781268   10685 system_pods.go:89] "amd-gpu-device-plugin-qtz57" [7718c757-52e9-4c21-8387-b22e46dbd672] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:35.781275   10685 system_pods.go:89] "coredns-66bc5c9577-dd8db" [9e860bf0-8080-4685-be57-8e4372d70758] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 11:30:35.781281   10685 system_pods.go:89] "csi-hostpath-attacher-0" [808c9abd-09ef-4a82-a9b0-40e0b5583c62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:35.781287   10685 system_pods.go:89] "csi-hostpath-resizer-0" [5fc9ea30-c6c5-4b52-801e-6f6744fcb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:35.781292   10685 system_pods.go:89] "csi-hostpathplugin-vd8h9" [8084337b-ce37-4904-b2d8-f9d98bec885a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:35.781297   10685 system_pods.go:89] "etcd-addons-162665" [985d8d51-a9b4-4613-8496-616cbbc9ba77] Running
	I1018 11:30:35.781302   10685 system_pods.go:89] "kindnet-chh44" [c8dd40f2-5d47-4163-a0f5-b4a42c683205] Running
	I1018 11:30:35.781308   10685 system_pods.go:89] "kube-apiserver-addons-162665" [b0263b5e-10dd-451f-a711-eafcf586b058] Running
	I1018 11:30:35.781311   10685 system_pods.go:89] "kube-controller-manager-addons-162665" [602b205c-f553-44c4-b952-749da212d7fc] Running
	I1018 11:30:35.781317   10685 system_pods.go:89] "kube-ingress-dns-minikube" [448dbfd9-bfeb-46dd-b9d4-8223a2d0208b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:35.781320   10685 system_pods.go:89] "kube-proxy-952nl" [d7c98ee8-f772-4ace-9296-8ed60510d4c6] Running
	I1018 11:30:35.781324   10685 system_pods.go:89] "kube-scheduler-addons-162665" [ad5158d7-dd62-4cf1-b936-323a01c48bea] Running
	I1018 11:30:35.781330   10685 system_pods.go:89] "metrics-server-85b7d694d7-4fbgz" [7862dfcb-3720-49c5-a912-e836d1598eaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:35.781343   10685 system_pods.go:89] "nvidia-device-plugin-daemonset-l95vf" [4c8e1e2a-6ab0-4cde-8847-b7cdf5b01ab4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:35.781350   10685 system_pods.go:89] "registry-6b586f9694-8ns6k" [c800a208-4e00-4ea5-bacc-ab4677684b88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:35.781357   10685 system_pods.go:89] "registry-creds-764b6fb674-hx56w" [b711b8e2-3d97-490b-bb1b-e5272a73c7bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:35.781369   10685 system_pods.go:89] "registry-proxy-tsk7w" [34d517d6-de7d-42f2-88d2-ae400f0fce9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:35.781380   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mhxbb" [e43d99f8-e9e2-4f3b-9b80-7b05e4c365db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.781393   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q4cgf" [f5e34437-83ad-4871-83fc-22cf1c594cc6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:35.781400   10685 system_pods.go:89] "storage-provisioner" [757a0a21-65a5-42b5-8599-5bad27d50df7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 11:30:35.781420   10685 retry.go:31] will retry after 284.500839ms: missing components: kube-dns
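The "will retry after 284.500839ms" line comes from minikube's retry helper (retry.go:31), which sleeps a randomized, growing delay between attempts. A hypothetical sketch of that jittered-backoff pattern, not minikube's exact code:

    package main

    import (
    	"log"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out,
    // sleeping a jittered, roughly doubling delay between tries so that
    // concurrent waiters do not hit the apiserver in lockstep.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		log.Printf("will retry after %v: %v", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }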
	I1018 11:30:35.821951   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:36.070792   10685 system_pods.go:86] 20 kube-system pods found
	I1018 11:30:36.070832   10685 system_pods.go:89] "amd-gpu-device-plugin-qtz57" [7718c757-52e9-4c21-8387-b22e46dbd672] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 11:30:36.070841   10685 system_pods.go:89] "coredns-66bc5c9577-dd8db" [9e860bf0-8080-4685-be57-8e4372d70758] Running
	I1018 11:30:36.070860   10685 system_pods.go:89] "csi-hostpath-attacher-0" [808c9abd-09ef-4a82-a9b0-40e0b5583c62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 11:30:36.070870   10685 system_pods.go:89] "csi-hostpath-resizer-0" [5fc9ea30-c6c5-4b52-801e-6f6744fcb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 11:30:36.070884   10685 system_pods.go:89] "csi-hostpathplugin-vd8h9" [8084337b-ce37-4904-b2d8-f9d98bec885a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 11:30:36.070893   10685 system_pods.go:89] "etcd-addons-162665" [985d8d51-a9b4-4613-8496-616cbbc9ba77] Running
	I1018 11:30:36.070899   10685 system_pods.go:89] "kindnet-chh44" [c8dd40f2-5d47-4163-a0f5-b4a42c683205] Running
	I1018 11:30:36.070903   10685 system_pods.go:89] "kube-apiserver-addons-162665" [b0263b5e-10dd-451f-a711-eafcf586b058] Running
	I1018 11:30:36.070912   10685 system_pods.go:89] "kube-controller-manager-addons-162665" [602b205c-f553-44c4-b952-749da212d7fc] Running
	I1018 11:30:36.070923   10685 system_pods.go:89] "kube-ingress-dns-minikube" [448dbfd9-bfeb-46dd-b9d4-8223a2d0208b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 11:30:36.070932   10685 system_pods.go:89] "kube-proxy-952nl" [d7c98ee8-f772-4ace-9296-8ed60510d4c6] Running
	I1018 11:30:36.070938   10685 system_pods.go:89] "kube-scheduler-addons-162665" [ad5158d7-dd62-4cf1-b936-323a01c48bea] Running
	I1018 11:30:36.070945   10685 system_pods.go:89] "metrics-server-85b7d694d7-4fbgz" [7862dfcb-3720-49c5-a912-e836d1598eaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 11:30:36.070960   10685 system_pods.go:89] "nvidia-device-plugin-daemonset-l95vf" [4c8e1e2a-6ab0-4cde-8847-b7cdf5b01ab4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 11:30:36.070969   10685 system_pods.go:89] "registry-6b586f9694-8ns6k" [c800a208-4e00-4ea5-bacc-ab4677684b88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 11:30:36.070977   10685 system_pods.go:89] "registry-creds-764b6fb674-hx56w" [b711b8e2-3d97-490b-bb1b-e5272a73c7bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 11:30:36.070984   10685 system_pods.go:89] "registry-proxy-tsk7w" [34d517d6-de7d-42f2-88d2-ae400f0fce9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 11:30:36.070991   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mhxbb" [e43d99f8-e9e2-4f3b-9b80-7b05e4c365db] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:36.071000   10685 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q4cgf" [f5e34437-83ad-4871-83fc-22cf1c594cc6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 11:30:36.071005   10685 system_pods.go:89] "storage-provisioner" [757a0a21-65a5-42b5-8599-5bad27d50df7] Running
	I1018 11:30:36.071017   10685 system_pods.go:126] duration metric: took 383.58023ms to wait for k8s-apps to be running ...
	I1018 11:30:36.071030   10685 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 11:30:36.071080   10685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:30:36.079343   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:36.088246   10685 system_svc.go:56] duration metric: took 17.204463ms WaitForService to wait for kubelet
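The kubelet gate above shells out to systemd; with --quiet nothing is printed and the exit status alone carries the answer. A sketch of the same probe run locally (minikube executes it over SSH inside the node container, so the command path is an assumption here):

    package main

    import "os/exec"

    // kubeletActive mirrors the `systemctl is-active --quiet` probe in
    // the log: a zero exit status means the unit is active.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }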
	I1018 11:30:36.088283   10685 kubeadm.go:586] duration metric: took 42.064592936s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 11:30:36.088307   10685 node_conditions.go:102] verifying NodePressure condition ...
	I1018 11:30:36.091198   10685 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 11:30:36.091250   10685 node_conditions.go:123] node cpu capacity is 8
	I1018 11:30:36.091267   10685 node_conditions.go:105] duration metric: took 2.954423ms to run NodePressure ...
	I1018 11:30:36.091283   10685 start.go:241] waiting for startup goroutines ...
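The capacity figures just logged (304681132Ki ephemeral storage, 8 CPUs) live on the Node object's status. A client-go sketch of reading them, again assuming a configured *kubernetes.Clientset named cs; the NodePressure verification itself inspects the node's conditions list in the same object:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeCapacity prints the same capacity fields the log reports,
    // read straight from the Node status.
    func nodeCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
    	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
    	return nil
    }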
	I1018 11:30:36.130785   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:36.130988   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:36.322887   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:36.577678   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:36.630472   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:36.630752   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:36.823191   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:37.078035   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:37.129547   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:37.129591   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:37.322573   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:37.578800   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:37.630491   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:37.630514   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:37.825714   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:38.078187   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:38.129797   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:38.130659   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:38.324232   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:38.578433   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:38.630580   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:38.630705   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:38.823693   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:39.078059   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:39.178397   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:39.178433   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:39.321916   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:39.577694   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:39.630903   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:39.631084   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:39.824620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:40.079381   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:40.131312   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:40.132700   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:40.322442   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:40.578748   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:40.630493   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:40.630571   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:40.823511   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:41.078169   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:41.130219   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:41.130324   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:41.322936   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:41.577432   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:41.630398   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:41.630419   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:41.823631   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:42.077942   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:42.129479   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:42.129522   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:42.323306   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:42.578690   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:42.630474   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:42.630916   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:42.822639   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:43.199474   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:43.199658   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:43.199799   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:43.328046   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:43.578451   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:43.631691   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:43.631728   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:43.823343   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:44.077860   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:44.130715   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:44.130749   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:44.322640   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:44.578127   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:44.630002   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:44.630026   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:44.822903   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:45.078100   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:45.178834   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:45.178934   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:45.323128   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:45.578853   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:45.630514   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:45.630524   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:45.823552   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:46.078548   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:46.179819   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:46.179881   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:46.322986   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:46.578398   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:46.626472   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:30:46.629796   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:46.629862   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:46.822305   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:47.079932   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:47.184752   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:47.184752   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:47.322873   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 11:30:47.335115   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:30:47.335147   10685 retry.go:31] will retry after 13.56763526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
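The apply fails client-side: kubectl validates that every document in a manifest carries apiVersion and kind, and ig-crd.yaml evidently reaches the node without them, so the deployment half applies while the CRD half is rejected. A hypothetical standalone checker for that condition (not kubectl's actual validator), reading a local copy of the manifest:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // Reports, per YAML document, the two fields kubectl complained about.
    func main() {
    	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the failing manifest
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		if doc == nil {
    			continue // blank document between separators
    		}
    		if doc["apiVersion"] == nil {
    			fmt.Printf("document %d: apiVersion not set\n", i)
    		}
    		if doc["kind"] == nil {
    			fmt.Printf("document %d: kind not set\n", i)
    		}
    	}
    }

Note that the retries below rerun the identical command, so they can only succeed if the file contents change in the meantime; the --validate=false escape hatch mentioned in the error would mask the problem rather than fix it.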
	I1018 11:30:47.579105   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:47.629698   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:47.629853   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:47.823624   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:48.078054   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:48.129823   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:48.129844   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:48.322068   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:48.578104   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:48.629663   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:48.629692   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:48.822018   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:49.078990   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:49.132982   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:49.133154   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:49.323573   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:49.579047   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:49.632107   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:49.632836   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:49.825441   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:50.078566   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:50.131079   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:50.131184   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:50.322949   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:50.578291   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:50.630697   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:50.630994   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:50.822308   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:51.079541   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:51.130752   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:51.131330   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:51.323656   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:51.577656   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:51.630673   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:51.630800   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:51.823252   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:52.078593   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:52.130661   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:52.130678   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:52.322438   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:52.629355   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:52.629411   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:52.629506   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:52.823412   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:53.078846   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:53.130718   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:53.130843   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:53.322236   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:53.578517   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:53.630284   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:53.630465   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:53.823741   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:54.078098   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:54.130357   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:54.130530   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:54.322315   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:54.578446   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:54.630325   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:54.630505   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:54.823310   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:55.078498   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:55.130430   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:55.130474   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:55.323020   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:55.578328   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:55.629837   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:55.629932   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:55.822563   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:56.077596   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:56.129951   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:56.130113   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:56.322914   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:56.577895   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:56.629800   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:56.629888   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:56.822465   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:57.078541   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:57.177081   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:57.177177   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:57.349648   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:57.578175   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:57.630378   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:57.630681   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:57.822631   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:58.078505   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:58.131215   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:58.131720   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:58.322934   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:58.579108   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:58.630032   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:58.630094   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:58.822629   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:59.079054   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:59.130546   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:59.130584   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:59.361021   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:30:59.578103   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:30:59.679102   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:30:59.679209   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:30:59.822535   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.079153   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:00.130026   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:00.130073   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:00.323623   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.577924   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:00.629539   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:00.629565   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:00.823110   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:00.903177   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 11:31:01.078646   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:01.130607   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:01.130666   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:01.323258   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:01.577951   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 11:31:01.586146   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:01.586181   10685 retry.go:31] will retry after 16.904479278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 11:31:01.630257   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:01.630304   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:01.823153   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:02.078259   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:02.129689   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:02.129847   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:02.322532   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:02.578689   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:02.630337   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:02.630351   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:02.823408   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:03.078868   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:03.129295   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:03.129335   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:03.323701   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:03.578485   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:03.678683   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:03.678723   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:03.823251   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:04.080878   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:04.130948   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:04.131775   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 11:31:04.335517   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:04.579485   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:04.631325   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:04.631331   10685 kapi.go:107] duration metric: took 1m9.004733027s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 11:31:04.822175   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:05.078698   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:05.129887   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:05.323177   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:05.579177   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:05.630046   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:05.822685   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:06.078027   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:06.130100   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:06.322836   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:06.577072   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:06.629629   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:06.822157   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:07.078472   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:07.130559   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:07.322569   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:07.581138   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:07.629859   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:07.826970   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:08.078356   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:08.130409   10685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 11:31:08.323368   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:08.578751   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:08.630941   10685 kapi.go:107] duration metric: took 1m13.004341926s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 11:31:08.975652   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:09.106753   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:09.322332   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:09.578257   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:09.823087   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:10.078698   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:10.324310   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:10.578297   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:10.822876   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:11.078384   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:11.323411   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:11.578864   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:11.822644   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 11:31:12.077620   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:12.323656   10685 kapi.go:107] duration metric: took 1m9.504126071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 11:31:12.326649   10685 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-162665 cluster.
	I1018 11:31:12.328108   10685 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 11:31:12.330055   10685 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 11:31:12.579338   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:13.078821   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:13.578399   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:14.078273   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:14.577472   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:15.078191   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:15.578071   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:16.078729   10685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 11:31:16.578732   10685 kapi.go:107] duration metric: took 1m20.504038466s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 11:31:18.492530   10685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 11:31:19.018892   10685 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 11:31:19.018996   10685 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 11:31:19.020983   10685 out.go:179] * Enabled addons: registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, nvidia-device-plugin, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 11:31:19.022142   10685 addons.go:514] duration metric: took 1m24.998418872s for enable addons: enabled=[registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner cloud-spanner nvidia-device-plugin metrics-server default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 11:31:19.022178   10685 start.go:246] waiting for cluster config update ...
	I1018 11:31:19.022199   10685 start.go:255] writing updated cluster config ...
	I1018 11:31:19.022445   10685 ssh_runner.go:195] Run: rm -f paused
	I1018 11:31:19.026326   10685 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 11:31:19.029476   10685 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dd8db" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.033303   10685 pod_ready.go:94] pod "coredns-66bc5c9577-dd8db" is "Ready"
	I1018 11:31:19.033330   10685 pod_ready.go:86] duration metric: took 3.836571ms for pod "coredns-66bc5c9577-dd8db" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.035007   10685 pod_ready.go:83] waiting for pod "etcd-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.038206   10685 pod_ready.go:94] pod "etcd-addons-162665" is "Ready"
	I1018 11:31:19.038224   10685 pod_ready.go:86] duration metric: took 3.199968ms for pod "etcd-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.039930   10685 pod_ready.go:83] waiting for pod "kube-apiserver-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.043251   10685 pod_ready.go:94] pod "kube-apiserver-addons-162665" is "Ready"
	I1018 11:31:19.043270   10685 pod_ready.go:86] duration metric: took 3.322227ms for pod "kube-apiserver-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.044906   10685 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.430249   10685 pod_ready.go:94] pod "kube-controller-manager-addons-162665" is "Ready"
	I1018 11:31:19.430282   10685 pod_ready.go:86] duration metric: took 385.356512ms for pod "kube-controller-manager-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:19.630475   10685 pod_ready.go:83] waiting for pod "kube-proxy-952nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.030063   10685 pod_ready.go:94] pod "kube-proxy-952nl" is "Ready"
	I1018 11:31:20.030092   10685 pod_ready.go:86] duration metric: took 399.586435ms for pod "kube-proxy-952nl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.230308   10685 pod_ready.go:83] waiting for pod "kube-scheduler-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.629921   10685 pod_ready.go:94] pod "kube-scheduler-addons-162665" is "Ready"
	I1018 11:31:20.629945   10685 pod_ready.go:86] duration metric: took 399.610694ms for pod "kube-scheduler-addons-162665" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 11:31:20.629956   10685 pod_ready.go:40] duration metric: took 1.603609293s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 11:31:20.673677   10685 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 11:31:20.675723   10685 out.go:179] * Done! kubectl is now configured to use "addons-162665" cluster and "default" namespace by default
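
The inspektor-gadget failures in the log above all share one root cause: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest omits the required apiVersion and kind fields, so every retry fails identically until minikube reports the addon error. As a rough sketch (the CRD name and spec below are illustrative, not the actual inspektor-gadget manifest), a CustomResourceDefinition that passes this validation starts like:

	apiVersion: apiextensions.k8s.io/v1    # missing here -> "apiVersion not set"
	kind: CustomResourceDefinition         # missing here -> "kind not set"
	metadata:
	  name: traces.gadget.kinvolk.io       # illustrative name
	spec:
	  group: gadget.kinvolk.io
	  names:
	    kind: Trace
	    plural: traces
	  scope: Namespaced
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

Passing --validate=false, as the kubectl error message suggests, would only mask the problem: the API server still cannot create an object whose group, version, and kind it cannot determine.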
	
	
	==> CRI-O <==
	Oct 18 11:31:15 addons-162665 crio[773]: time="2025-10-18T11:31:15.575932535Z" level=info msg="Starting container: 488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb" id=139d7a64-ca88-46a0-b054-2ece7077f7bd name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 11:31:15 addons-162665 crio[773]: time="2025-10-18T11:31:15.579358263Z" level=info msg="Started container" PID=6250 containerID=488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb description=kube-system/csi-hostpathplugin-vd8h9/csi-snapshotter id=139d7a64-ca88-46a0-b054-2ece7077f7bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=2fd235451945960dbf718dc7180fbbf100b1f38c43ab92f122a58081db8b5313
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.572795687Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8f0a48dc-46e9-4c68-80b8-f2041a4a0837 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.572883472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.578282405Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b5469d09f8566fd53b08c52d3da4906cfb601c97820c9524fcf85ba8652097d1 UID:63e62b2d-6b2a-4e68-be20-6ccd92ea0265 NetNS:/var/run/netns/733f70f6-f7c5-473e-ad78-f224d63439d1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128e78}] Aliases:map[]}"
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.578314963Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.588417138Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b5469d09f8566fd53b08c52d3da4906cfb601c97820c9524fcf85ba8652097d1 UID:63e62b2d-6b2a-4e68-be20-6ccd92ea0265 NetNS:/var/run/netns/733f70f6-f7c5-473e-ad78-f224d63439d1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128e78}] Aliases:map[]}"
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.588544986Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.589371933Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.590203111Z" level=info msg="Ran pod sandbox b5469d09f8566fd53b08c52d3da4906cfb601c97820c9524fcf85ba8652097d1 with infra container: default/busybox/POD" id=8f0a48dc-46e9-4c68-80b8-f2041a4a0837 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.591287534Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd0e83d4-ac21-4edd-a6d8-5fefe2a7aa05 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.591398714Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fd0e83d4-ac21-4edd-a6d8-5fefe2a7aa05 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.591428072Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=fd0e83d4-ac21-4edd-a6d8-5fefe2a7aa05 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.591967394Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=acdaa58c-e519-41c2-ae47-a1fd99e68bc8 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:31:21 addons-162665 crio[773]: time="2025-10-18T11:31:21.593314936Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.817853039Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=acdaa58c-e519-41c2-ae47-a1fd99e68bc8 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.818429031Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e30e647f-4d32-4ebc-b244-1c6128883482 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.819815567Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fca0d675-23a4-48e5-80b5-48b4550ac7ab name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.823339839Z" level=info msg="Creating container: default/busybox/busybox" id=d914aaf8-2d87-4dc0-9c11-5ecb52f8ad74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.823950095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.828984038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.829374493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.857940842Z" level=info msg="Created container 993a2b10e202621e217074bfb1f0bce1b0ea22325d26bfccafbf30bbfd027449: default/busybox/busybox" id=d914aaf8-2d87-4dc0-9c11-5ecb52f8ad74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.858601793Z" level=info msg="Starting container: 993a2b10e202621e217074bfb1f0bce1b0ea22325d26bfccafbf30bbfd027449" id=3e8c46e3-2be9-41bb-8036-22ff7d7d8430 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 11:31:22 addons-162665 crio[773]: time="2025-10-18T11:31:22.860407898Z" level=info msg="Started container" PID=6380 containerID=993a2b10e202621e217074bfb1f0bce1b0ea22325d26bfccafbf30bbfd027449 description=default/busybox/busybox id=3e8c46e3-2be9-41bb-8036-22ff7d7d8430 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5469d09f8566fd53b08c52d3da4906cfb601c97820c9524fcf85ba8652097d1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	993a2b10e2026       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   b5469d09f8566       busybox                                     default
	488c15000b978       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          15 seconds ago       Running             csi-snapshotter                          0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	a27fdd7026b29       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          16 seconds ago       Running             csi-provisioner                          0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	e58b8a219585a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 seconds ago       Running             liveness-probe                           0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	80ee1a432463a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	d539fd7cbcbbe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 19 seconds ago       Running             gcp-auth                                 0                   7c6d96b73cbd1       gcp-auth-78565c9fb4-kr9d8                   gcp-auth
	1c7e5acf2100a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                20 seconds ago       Running             node-driver-registrar                    0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	46ebf17b2eaaa       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            21 seconds ago       Running             gadget                                   0                   9c27e42afb04e       gadget-vscpb                                gadget
	fe24ec6bccde8       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             23 seconds ago       Running             controller                               0                   36ca410debc3e       ingress-nginx-controller-675c5ddd98-splxz   ingress-nginx
	43a9f95eacc82       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              27 seconds ago       Running             registry-proxy                           0                   e5d84b0a13043       registry-proxy-tsk7w                        kube-system
	7f162f04036aa       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              29 seconds ago       Running             csi-resizer                              0                   e45622ea7a09f       csi-hostpath-resizer-0                      kube-system
	e1d070dd7f484       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             29 seconds ago       Exited              patch                                    1                   d8424c2231522       gcp-auth-certs-patch-nbpz5                  gcp-auth
	54cfac14c2370       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   30 seconds ago       Exited              create                                   0                   ac6f7a535ea84       gcp-auth-certs-create-nbchn                 gcp-auth
	763f4d62397d6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   30 seconds ago       Running             csi-external-health-monitor-controller   0                   2fd2354519459       csi-hostpathplugin-vd8h9                    kube-system
	2ab0798158fad       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              31 seconds ago       Running             yakd                                     0                   c383cce8bc50d       yakd-dashboard-5ff678cb9-8jpkg              yakd-dashboard
	230e9f4fd3747       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   406ae14baf268       csi-hostpath-attacher-0                     kube-system
	98ea2b43ee1f9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago       Running             volume-snapshot-controller               0                   d8d2220e2dc31       snapshot-controller-7d9fbc56b8-mhxbb        kube-system
	6055994c2f9ad       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   35 seconds ago       Exited              patch                                    0                   bdfe716adc399       ingress-nginx-admission-patch-d4dp5         ingress-nginx
	c47f2661c7342       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     35 seconds ago       Running             nvidia-device-plugin-ctr                 0                   528e1befe732d       nvidia-device-plugin-daemonset-l95vf        kube-system
	7da1e14278c12       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     43 seconds ago       Running             amd-gpu-device-plugin                    0                   3c07f98cb1613       amd-gpu-device-plugin-qtz57                 kube-system
	03c9856418e49       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      43 seconds ago       Running             volume-snapshot-controller               0                   03d1a2af1b7a0       snapshot-controller-7d9fbc56b8-q4cgf        kube-system
	66eeb7fe3345b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   45 seconds ago       Exited              create                                   0                   26df9f77bbc31       ingress-nginx-admission-create-g2s9g        ingress-nginx
	2d9dfc50ea0d7       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        45 seconds ago       Running             metrics-server                           0                   4fb3295698524       metrics-server-85b7d694d7-4fbgz             kube-system
	86f0ff52ac8ce       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             46 seconds ago       Running             local-path-provisioner                   0                   01402d2be55e1       local-path-provisioner-648f6765c9-mrfgl     local-path-storage
	f9c877c63013c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               47 seconds ago       Running             minikube-ingress-dns                     0                   0d41833c8a2fb       kube-ingress-dns-minikube                   kube-system
	24f62efb65dfc       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               51 seconds ago       Running             cloud-spanner-emulator                   0                   3aa84b61ab0a1       cloud-spanner-emulator-86bd5cbb97-rmg8m     default
	07d2ff78db059       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           54 seconds ago       Running             registry                                 0                   15d2ba2abafd7       registry-6b586f9694-8ns6k                   kube-system
	bfb31922272c5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             55 seconds ago       Running             coredns                                  0                   529e8cc60ef3c       coredns-66bc5c9577-dd8db                    kube-system
	875e77b7948ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             55 seconds ago       Running             storage-provisioner                      0                   818084b37bc78       storage-provisioner                         kube-system
	371ec5ccac551       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   77f155ba37ace       kube-proxy-952nl                            kube-system
	63d2fc63799c7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   d2964eaabd9f2       kindnet-chh44                               kube-system
	7c7aa4df8e12b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   21b89fefafe32       kube-controller-manager-addons-162665       kube-system
	4b7561783145a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   410373435ed89       kube-apiserver-addons-162665                kube-system
	ba7d02bd6b761       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   d3bcb0bdaaf12       kube-scheduler-addons-162665                kube-system
	a0d7b2076afe9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   a0763b46d9953       etcd-addons-162665                          kube-system
	
	
	==> coredns [bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb] <==
	[INFO] 10.244.0.17:45082 - 44928 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.006259537s
	[INFO] 10.244.0.17:41116 - 41034 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000076875s
	[INFO] 10.244.0.17:41116 - 40702 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000144141s
	[INFO] 10.244.0.17:47140 - 17664 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000067758s
	[INFO] 10.244.0.17:47140 - 17340 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00009531s
	[INFO] 10.244.0.17:40270 - 35159 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000061621s
	[INFO] 10.244.0.17:40270 - 35435 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000096312s
	[INFO] 10.244.0.17:51788 - 36846 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000130322s
	[INFO] 10.244.0.17:51788 - 36356 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000176865s
	[INFO] 10.244.0.22:37428 - 1490 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178481s
	[INFO] 10.244.0.22:51413 - 56818 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000269867s
	[INFO] 10.244.0.22:54203 - 11225 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134323s
	[INFO] 10.244.0.22:35455 - 24312 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000184193s
	[INFO] 10.244.0.22:59099 - 50669 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102545s
	[INFO] 10.244.0.22:49647 - 60947 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117802s
	[INFO] 10.244.0.22:36524 - 2015 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003219731s
	[INFO] 10.244.0.22:59819 - 35649 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004052374s
	[INFO] 10.244.0.22:54194 - 19984 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004710196s
	[INFO] 10.244.0.22:53097 - 55888 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006304502s
	[INFO] 10.244.0.22:40575 - 41268 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006828671s
	[INFO] 10.244.0.22:42250 - 56787 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007615483s
	[INFO] 10.244.0.22:53693 - 8454 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004945003s
	[INFO] 10.244.0.22:37256 - 50028 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00523981s
	[INFO] 10.244.0.22:52193 - 242 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002444029s
	[INFO] 10.244.0.22:42223 - 43498 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002708791s
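
The NXDOMAIN bursts above are the normal cluster DNS search-path fan-out rather than failures: with the kubelet's default ndots:5, an external name such as storage.googleapis.com is tried against every search domain first and only then resolved as an absolute name, which is exactly the query sequence logged. A sketch of the resolv.conf a pod in the gcp-auth namespace would see, with the search list read off the queries above (the nameserver address is an assumption, the usual kube-dns ClusterIP):

	nameserver 10.96.0.10
	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	options ndots:5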
	
	
	==> describe nodes <==
	Name:               addons-162665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-162665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-162665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T11_29_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-162665
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-162665"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 11:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-162665
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 11:31:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 11:31:20 +0000   Sat, 18 Oct 2025 11:29:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 11:31:20 +0000   Sat, 18 Oct 2025 11:29:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 11:31:20 +0000   Sat, 18 Oct 2025 11:29:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 11:31:20 +0000   Sat, 18 Oct 2025 11:30:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-162665
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                7f3dd06e-c800-4da1-b5f5-24431ef08e12
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-rmg8m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gadget                      gadget-vscpb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gcp-auth                    gcp-auth-78565c9fb4-kr9d8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-splxz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         96s
	  kube-system                 amd-gpu-device-plugin-qtz57                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 coredns-66bc5c9577-dd8db                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     98s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 csi-hostpathplugin-vd8h9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 etcd-addons-162665                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         104s
	  kube-system                 kindnet-chh44                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      98s
	  kube-system                 kube-apiserver-addons-162665                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-addons-162665        200m (2%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-952nl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-addons-162665                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 metrics-server-85b7d694d7-4fbgz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         96s
	  kube-system                 nvidia-device-plugin-daemonset-l95vf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 registry-6b586f9694-8ns6k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 registry-creds-764b6fb674-hx56w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 registry-proxy-tsk7w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 snapshot-controller-7d9fbc56b8-mhxbb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 snapshot-controller-7d9fbc56b8-q4cgf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  local-path-storage          local-path-provisioner-648f6765c9-mrfgl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8jpkg               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 96s   kube-proxy       
	  Normal  Starting                 104s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s  kubelet          Node addons-162665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s  kubelet          Node addons-162665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s  kubelet          Node addons-162665 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           99s   node-controller  Node addons-162665 event: Registered Node addons-162665 in Controller
	  Normal  NodeReady                56s   kubelet          Node addons-162665 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 11:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001819] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002003] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.407420] i8042: Warning: Keylock active
	[  +0.009992] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003536] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001176] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000652] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000627] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000634] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000644] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516247] block sda: the capability attribute has been deprecated.
	[  +0.098201] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.055601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.500112] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956] <==
	{"level":"warn","ts":"2025-10-18T11:29:45.180311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.186397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.192356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.198566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.204693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.211629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.218208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.225265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.232340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.246110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.253286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.259477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:45.311206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:56.547650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:29:56.553789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:30:22.710978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:30:22.738048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:30:43.197695Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.221783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:30:43.198059Z","caller":"traceutil/trace.go:172","msg":"trace[1690408968] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:966; }","duration":"121.607288ms","start":"2025-10-18T11:30:43.076435Z","end":"2025-10-18T11:30:43.198043Z","steps":["trace[1690408968] 'range keys from in-memory index tree'  (duration: 121.142547ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:30:52.627498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.779294ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040713988101891 > lease_revoke:<id:70cc99f7152c55bc>","response":"size:29"}
	{"level":"info","ts":"2025-10-18T11:30:57.347936Z","caller":"traceutil/trace.go:172","msg":"trace[1705074662] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"171.263528ms","start":"2025-10-18T11:30:57.176652Z","end":"2025-10-18T11:30:57.347916Z","steps":["trace[1705074662] 'process raft request'  (duration: 144.517198ms)","trace[1705074662] 'compare'  (duration: 26.639881ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T11:30:57.347966Z","caller":"traceutil/trace.go:172","msg":"trace[1291800020] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"167.803525ms","start":"2025-10-18T11:30:57.180147Z","end":"2025-10-18T11:30:57.347950Z","steps":["trace[1291800020] 'process raft request'  (duration: 167.755638ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T11:31:08.973952Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.342241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T11:31:08.974016Z","caller":"traceutil/trace.go:172","msg":"trace[255875423] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"151.413196ms","start":"2025-10-18T11:31:08.822587Z","end":"2025-10-18T11:31:08.974000Z","steps":["trace[255875423] 'agreement among raft nodes before linearized reading'  (duration: 58.301056ms)","trace[255875423] 'range keys from in-memory index tree'  (duration: 93.008116ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T11:31:08.974109Z","caller":"traceutil/trace.go:172","msg":"trace[1475923364] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"149.520352ms","start":"2025-10-18T11:31:08.824573Z","end":"2025-10-18T11:31:08.974093Z","steps":["trace[1475923364] 'process raft request'  (duration: 56.359522ms)","trace[1475923364] 'compare'  (duration: 93.028727ms)"],"step_count":2}
	
	
	==> gcp-auth [d539fd7cbcbbe623dd11ed18b85907089bc31258e45ad6360d0dcb7f28bb0cb5] <==
	2025/10/18 11:31:12 GCP Auth Webhook started!
	2025/10/18 11:31:20 Ready to marshal response ...
	2025/10/18 11:31:20 Ready to write response ...
	2025/10/18 11:31:21 Ready to marshal response ...
	2025/10/18 11:31:21 Ready to write response ...
	2025/10/18 11:31:21 Ready to marshal response ...
	2025/10/18 11:31:21 Ready to write response ...
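
These webhook log lines correspond to the gcp-auth hint minikube printed earlier: once the addon is running, new pods get GCP credentials mounted automatically unless they opt out. A minimal sketch of the opt-out, assuming only the gcp-auth-skip-secret label key quoted in the minikube output (the pod name and label value here are illustrative):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"   # key from the minikube hint; value illustrative
	spec:
	  containers:
	    - name: app
	      image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	      command: ["sleep", "3600"]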
	
	
	==> kernel <==
	 11:31:31 up 13 min,  0 user,  load average: 2.13, 0.94, 0.36
	Linux addons-162665 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb] <==
	I1018 11:29:54.579343       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T11:29:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 11:29:54.875050       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 11:29:54.875098       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 11:29:54.875116       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 11:29:54.875522       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 11:30:24.789627       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 11:30:24.876242       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 11:30:24.879004       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 11:30:24.879019       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 11:30:26.276480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 11:30:26.276508       1 metrics.go:72] Registering metrics
	I1018 11:30:26.276569       1 controller.go:711] "Syncing nftables rules"
	I1018 11:30:34.790918       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:30:34.790964       1 main.go:301] handling current node
	I1018 11:30:44.787990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:30:44.788040       1 main.go:301] handling current node
	I1018 11:30:54.790016       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:30:54.790058       1 main.go:301] handling current node
	I1018 11:31:04.787557       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:31:04.787590       1 main.go:301] handling current node
	I1018 11:31:14.787964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:31:14.787997       1 main.go:301] handling current node
	I1018 11:31:24.787861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:31:24.787908       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92] <==
	W1018 11:29:56.553733       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 11:30:02.764268       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.24.1"}
	W1018 11:30:22.704475       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 11:30:22.710945       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 11:30:22.731505       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 11:30:22.737969       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 11:30:35.279995       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.280062       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:35.280102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.280128       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:35.299474       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.299512       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:35.300701       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.24.1:443: connect: connection refused
	E1018 11:30:35.300737       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.24.1:443: connect: connection refused" logger="UnhandledError"
	W1018 11:30:47.115978       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 11:30:47.116084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 11:30:47.116135       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.71.1:443: connect: connection refused" logger="UnhandledError"
	E1018 11:30:47.117968       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.71.1:443: connect: connection refused" logger="UnhandledError"
	E1018 11:30:47.123476       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.71.1:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.71.1:443: connect: connection refused" logger="UnhandledError"
	I1018 11:30:47.179510       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 11:31:29.413107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50808: use of closed network connection
	E1018 11:31:29.575189       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50842: use of closed network connection
	
	
	==> kube-controller-manager [7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660] <==
	I1018 11:29:52.690443       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 11:29:52.690459       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 11:29:52.690740       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 11:29:52.690753       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 11:29:52.694001       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 11:29:52.694067       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 11:29:52.694102       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 11:29:52.694109       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 11:29:52.694113       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 11:29:52.694243       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:29:52.697372       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 11:29:52.700112       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-162665" podCIDRs=["10.244.0.0/24"]
	I1018 11:29:52.704226       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 11:29:52.704248       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 11:29:52.704287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 11:29:52.705451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 11:29:52.712824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 11:30:22.699196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 11:30:22.699331       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 11:30:22.699376       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 11:30:22.721936       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 11:30:22.725540       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 11:30:22.800507       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:30:22.826160       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 11:30:37.656533       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a] <==
	I1018 11:29:54.372926       1 server_linux.go:53] "Using iptables proxy"
	I1018 11:29:54.484662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 11:29:54.585290       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 11:29:54.585359       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 11:29:54.591159       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 11:29:55.078207       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 11:29:55.078290       1 server_linux.go:132] "Using iptables Proxier"
	I1018 11:29:55.119783       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 11:29:55.130172       1 server.go:527] "Version info" version="v1.34.1"
	I1018 11:29:55.130484       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 11:29:55.133600       1 config.go:200] "Starting service config controller"
	I1018 11:29:55.134894       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 11:29:55.134139       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 11:29:55.135050       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 11:29:55.134590       1 config.go:309] "Starting node config controller"
	I1018 11:29:55.135130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 11:29:55.135154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 11:29:55.134131       1 config.go:106] "Starting endpoint slice config controller"
	I1018 11:29:55.135197       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 11:29:55.236096       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 11:29:55.236155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 11:29:55.236502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51] <==
	E1018 11:29:45.712709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:29:45.712729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 11:29:45.712825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 11:29:45.712851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 11:29:45.712896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 11:29:45.712919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 11:29:45.712925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 11:29:45.712973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 11:29:45.712512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 11:29:45.713099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 11:29:45.713205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 11:29:45.713243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 11:29:45.713254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 11:29:45.713254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 11:29:45.713313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 11:29:45.713315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 11:29:46.529864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 11:29:46.700575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 11:29:46.761603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 11:29:46.809654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 11:29:46.821912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:29:46.835420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 11:29:46.851019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 11:29:47.065253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 11:29:49.910751       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 11:31:02 addons-162665 kubelet[1309]: I1018 11:31:02.154613    1309 scope.go:117] "RemoveContainer" containerID="c80c89c84508ef730bfc20a1f0c90fba689e02c4c2831cf22900d196c231e835"
	Oct 18 11:31:02 addons-162665 kubelet[1309]: I1018 11:31:02.175192    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=41.168743391 podStartE2EDuration="1m7.175178496s" podCreationTimestamp="2025-10-18 11:29:55 +0000 UTC" firstStartedPulling="2025-10-18 11:30:35.736931603 +0000 UTC m=+47.921708224" lastFinishedPulling="2025-10-18 11:31:01.743366723 +0000 UTC m=+73.928143329" observedRunningTime="2025-10-18 11:31:02.174460558 +0000 UTC m=+74.359237185" watchObservedRunningTime="2025-10-18 11:31:02.175178496 +0000 UTC m=+74.359955121"
	Oct 18 11:31:02 addons-162665 kubelet[1309]: I1018 11:31:02.289957    1309 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jgt6\" (UniqueName: \"kubernetes.io/projected/c5317d6e-7258-4ec5-9826-d1bba5249687-kube-api-access-6jgt6\") pod \"c5317d6e-7258-4ec5-9826-d1bba5249687\" (UID: \"c5317d6e-7258-4ec5-9826-d1bba5249687\") "
	Oct 18 11:31:02 addons-162665 kubelet[1309]: I1018 11:31:02.292143    1309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5317d6e-7258-4ec5-9826-d1bba5249687-kube-api-access-6jgt6" (OuterVolumeSpecName: "kube-api-access-6jgt6") pod "c5317d6e-7258-4ec5-9826-d1bba5249687" (UID: "c5317d6e-7258-4ec5-9826-d1bba5249687"). InnerVolumeSpecName "kube-api-access-6jgt6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 11:31:02 addons-162665 kubelet[1309]: I1018 11:31:02.391305    1309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jgt6\" (UniqueName: \"kubernetes.io/projected/c5317d6e-7258-4ec5-9826-d1bba5249687-kube-api-access-6jgt6\") on node \"addons-162665\" DevicePath \"\""
	Oct 18 11:31:03 addons-162665 kubelet[1309]: I1018 11:31:03.162611    1309 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac6f7a535ea8455d1f09212152c5af5749eef0d071380cd2f499438f2461f558"
	Oct 18 11:31:03 addons-162665 kubelet[1309]: I1018 11:31:03.701562    1309 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8dq7\" (UniqueName: \"kubernetes.io/projected/ae2fdc0c-c739-43ec-a2bc-627e6151982a-kube-api-access-s8dq7\") pod \"ae2fdc0c-c739-43ec-a2bc-627e6151982a\" (UID: \"ae2fdc0c-c739-43ec-a2bc-627e6151982a\") "
	Oct 18 11:31:03 addons-162665 kubelet[1309]: I1018 11:31:03.703671    1309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2fdc0c-c739-43ec-a2bc-627e6151982a-kube-api-access-s8dq7" (OuterVolumeSpecName: "kube-api-access-s8dq7") pod "ae2fdc0c-c739-43ec-a2bc-627e6151982a" (UID: "ae2fdc0c-c739-43ec-a2bc-627e6151982a"). InnerVolumeSpecName "kube-api-access-s8dq7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 11:31:03 addons-162665 kubelet[1309]: I1018 11:31:03.802694    1309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8dq7\" (UniqueName: \"kubernetes.io/projected/ae2fdc0c-c739-43ec-a2bc-627e6151982a-kube-api-access-s8dq7\") on node \"addons-162665\" DevicePath \"\""
	Oct 18 11:31:04 addons-162665 kubelet[1309]: I1018 11:31:04.170572    1309 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tsk7w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:31:04 addons-162665 kubelet[1309]: I1018 11:31:04.175243    1309 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8424c22315225291a0f79847edccb80328d988ec0b814124f26cf38903fcec2"
	Oct 18 11:31:04 addons-162665 kubelet[1309]: I1018 11:31:04.194148    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-tsk7w" podStartSLOduration=1.39145984 podStartE2EDuration="29.194126189s" podCreationTimestamp="2025-10-18 11:30:35 +0000 UTC" firstStartedPulling="2025-10-18 11:30:35.754124601 +0000 UTC m=+47.938901218" lastFinishedPulling="2025-10-18 11:31:03.556790942 +0000 UTC m=+75.741567567" observedRunningTime="2025-10-18 11:31:04.193100258 +0000 UTC m=+76.377876884" watchObservedRunningTime="2025-10-18 11:31:04.194126189 +0000 UTC m=+76.378902815"
	Oct 18 11:31:05 addons-162665 kubelet[1309]: I1018 11:31:05.178721    1309 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tsk7w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 11:31:07 addons-162665 kubelet[1309]: E1018 11:31:07.128607    1309 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 11:31:07 addons-162665 kubelet[1309]: E1018 11:31:07.128711    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b711b8e2-3d97-490b-bb1b-e5272a73c7bf-gcr-creds podName:b711b8e2-3d97-490b-bb1b-e5272a73c7bf nodeName:}" failed. No retries permitted until 2025-10-18 11:31:39.128688961 +0000 UTC m=+111.313465569 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b711b8e2-3d97-490b-bb1b-e5272a73c7bf-gcr-creds") pod "registry-creds-764b6fb674-hx56w" (UID: "b711b8e2-3d97-490b-bb1b-e5272a73c7bf") : secret "registry-creds-gcr" not found
	Oct 18 11:31:08 addons-162665 kubelet[1309]: I1018 11:31:08.210893    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-splxz" podStartSLOduration=60.606874106 podStartE2EDuration="1m13.21087202s" podCreationTimestamp="2025-10-18 11:29:55 +0000 UTC" firstStartedPulling="2025-10-18 11:30:54.799774516 +0000 UTC m=+66.984551138" lastFinishedPulling="2025-10-18 11:31:07.403772426 +0000 UTC m=+79.588549052" observedRunningTime="2025-10-18 11:31:08.210223846 +0000 UTC m=+80.395000472" watchObservedRunningTime="2025-10-18 11:31:08.21087202 +0000 UTC m=+80.395648648"
	Oct 18 11:31:10 addons-162665 kubelet[1309]: I1018 11:31:10.212345    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-vscpb" podStartSLOduration=65.153158392 podStartE2EDuration="1m15.212323837s" podCreationTimestamp="2025-10-18 11:29:55 +0000 UTC" firstStartedPulling="2025-10-18 11:30:59.566726015 +0000 UTC m=+71.751502632" lastFinishedPulling="2025-10-18 11:31:09.625891455 +0000 UTC m=+81.810668077" observedRunningTime="2025-10-18 11:31:10.211789639 +0000 UTC m=+82.396566263" watchObservedRunningTime="2025-10-18 11:31:10.212323837 +0000 UTC m=+82.397100463"
	Oct 18 11:31:13 addons-162665 kubelet[1309]: I1018 11:31:13.951663    1309 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 18 11:31:13 addons-162665 kubelet[1309]: I1018 11:31:13.951712    1309 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 18 11:31:14 addons-162665 kubelet[1309]: I1018 11:31:14.633927    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-kr9d8" podStartSLOduration=68.12017172 podStartE2EDuration="1m12.633909506s" podCreationTimestamp="2025-10-18 11:30:02 +0000 UTC" firstStartedPulling="2025-10-18 11:31:07.450576161 +0000 UTC m=+79.635352783" lastFinishedPulling="2025-10-18 11:31:11.964313948 +0000 UTC m=+84.149090569" observedRunningTime="2025-10-18 11:31:12.229371403 +0000 UTC m=+84.414148030" watchObservedRunningTime="2025-10-18 11:31:14.633909506 +0000 UTC m=+86.818686132"
	Oct 18 11:31:16 addons-162665 kubelet[1309]: I1018 11:31:16.253669    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-vd8h9" podStartSLOduration=1.456435136 podStartE2EDuration="41.25365017s" podCreationTimestamp="2025-10-18 11:30:35 +0000 UTC" firstStartedPulling="2025-10-18 11:30:35.734021583 +0000 UTC m=+47.918798188" lastFinishedPulling="2025-10-18 11:31:15.531236601 +0000 UTC m=+87.716013222" observedRunningTime="2025-10-18 11:31:16.253337842 +0000 UTC m=+88.438114469" watchObservedRunningTime="2025-10-18 11:31:16.25365017 +0000 UTC m=+88.438426797"
	Oct 18 11:31:21 addons-162665 kubelet[1309]: I1018 11:31:21.333810    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkrp\" (UniqueName: \"kubernetes.io/projected/63e62b2d-6b2a-4e68-be20-6ccd92ea0265-kube-api-access-rkkrp\") pod \"busybox\" (UID: \"63e62b2d-6b2a-4e68-be20-6ccd92ea0265\") " pod="default/busybox"
	Oct 18 11:31:21 addons-162665 kubelet[1309]: I1018 11:31:21.333854    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/63e62b2d-6b2a-4e68-be20-6ccd92ea0265-gcp-creds\") pod \"busybox\" (UID: \"63e62b2d-6b2a-4e68-be20-6ccd92ea0265\") " pod="default/busybox"
	Oct 18 11:31:23 addons-162665 kubelet[1309]: I1018 11:31:23.280727    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.053143766 podStartE2EDuration="2.280713129s" podCreationTimestamp="2025-10-18 11:31:21 +0000 UTC" firstStartedPulling="2025-10-18 11:31:21.59165479 +0000 UTC m=+93.776431395" lastFinishedPulling="2025-10-18 11:31:22.81922414 +0000 UTC m=+95.004000758" observedRunningTime="2025-10-18 11:31:23.279456855 +0000 UTC m=+95.464233481" watchObservedRunningTime="2025-10-18 11:31:23.280713129 +0000 UTC m=+95.465489754"
	Oct 18 11:31:29 addons-162665 kubelet[1309]: E1018 11:31:29.575089    1309 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57082->127.0.0.1:43013: write tcp 127.0.0.1:57082->127.0.0.1:43013: write: broken pipe
	
	
	==> storage-provisioner [875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3] <==
	W1018 11:31:05.869154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:07.874832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:07.883866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:09.886565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:09.894695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:11.898133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:11.903652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:13.906896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:13.910719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:15.913559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:15.917561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:17.920296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:17.924714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:19.928071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:19.933776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:21.937119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:21.940700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:23.943866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:23.947733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:25.951087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:25.956481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:27.959095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:27.962731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:29.965983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:31:29.971061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
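
Aside on the storage-provisioner block above: the steady two-per-second stream of deprecation warnings suggests something in the provisioner is still listing or watching core/v1 Endpoints, which the API server now flags in favor of discovery.k8s.io/v1 EndpointSlice. A hedged client-go sketch of the suggested replacement follows; it assumes a standard kubeconfig and shows the generic API swap only, not a patch to the provisioner itself:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// clientcmd.RecommendedHomeFile resolves to ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Deprecated read path (the source of the warnings above):
		//   cs.CoreV1().Endpoints("kube-system").List(ctx, metav1.ListOptions{})
		// Replacement: EndpointSlices, which shard large services and carry
		// per-endpoint conditions.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}
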
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-162665 -n addons-162665
helpers_test.go:269: (dbg) Run:  kubectl --context addons-162665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-create-nbchn gcp-auth-certs-patch-nbpz5 ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5 registry-creds-764b6fb674-hx56w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-162665 describe pod gcp-auth-certs-create-nbchn gcp-auth-certs-patch-nbpz5 ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5 registry-creds-764b6fb674-hx56w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-162665 describe pod gcp-auth-certs-create-nbchn gcp-auth-certs-patch-nbpz5 ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5 registry-creds-764b6fb674-hx56w: exit status 1 (61.255223ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-nbchn" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-nbpz5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-g2s9g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d4dp5" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-hx56w" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-162665 describe pod gcp-auth-certs-create-nbchn gcp-auth-certs-patch-nbpz5 ingress-nginx-admission-create-g2s9g ingress-nginx-admission-patch-d4dp5 registry-creds-764b6fb674-hx56w: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable headlamp --alsologtostderr -v=1: exit status 11 (234.892113ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:31:32.136027   19895 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:32.136333   19895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:32.136344   19895 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:32.136348   19895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:32.136538   19895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:32.136821   19895 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:32.137164   19895 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:32.137186   19895 addons.go:606] checking whether the cluster is paused
	I1018 11:31:32.137265   19895 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:32.137277   19895 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:32.137661   19895 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:32.156413   19895 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:32.156469   19895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:32.174795   19895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:32.270488   19895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:32.270582   19895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:32.303912   19895 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:32.303931   19895 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:32.303935   19895 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:32.303939   19895 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:32.303941   19895 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:32.303944   19895 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:32.303947   19895 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:32.303949   19895 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:32.303952   19895 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:32.303957   19895 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:32.303959   19895 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:32.303962   19895 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:32.303965   19895 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:32.303968   19895 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:32.303971   19895 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:32.303982   19895 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:32.303989   19895 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:32.303993   19895 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:32.303996   19895 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:32.303998   19895 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:32.304001   19895 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:32.304003   19895 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:32.304005   19895 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:32.304008   19895 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:32.304010   19895 cri.go:89] found id: ""
	I1018 11:31:32.304050   19895 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:32.318204   19895 out.go:203] 
	W1018 11:31:32.319737   19895 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:32.319780   19895 out.go:285] * 
	* 
	W1018 11:31:32.322773   19895 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:32.324145   19895 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.52s)
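
Note on the failure mode: every MK_ADDON_DISABLE_PAUSED exit in this report traces back to the same two-step check visible in the stderr above. minikube first lists kube-system containers over SSH with crictl, then asks runc for the set of paused containers; on this crio node /run/runc does not exist, so `sudo runc list -f json` exits 1 and the disable aborts with exit status 11. Below is a minimal Go sketch of that sequence, reconstructed from the trace; only the two shell commands are taken from the log, and the function names and error handling are illustrative, not minikube's actual code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// listKubeSystemContainers mirrors the first command in the trace:
	// crictl returns the IDs of all containers in the kube-system namespace.
	func listKubeSystemContainers() ([]byte, error) {
		return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	}

	// listPausedWithRunc mirrors the failing step. runc keeps its state under
	// /run/runc; on a crio-only node that directory is absent, so the command
	// fails with "open /run/runc: no such file or directory".
	func listPausedWithRunc() ([]map[string]any, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("list paused: runc: %w", err)
		}
		var states []map[string]any
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		return states, nil
	}

	func main() {
		if _, err := listKubeSystemContainers(); err != nil {
			fmt.Println("crictl failed:", err)
		}
		if _, err := listPausedWithRunc(); err != nil {
			// This is the branch every addons-disable test in this report hits.
			fmt.Println("paused check failed:", err)
		}
	}

A crio-aware variant of the check would presumably query container state through crictl itself rather than shelling out to runc.
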

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-rmg8m" [b1a2d499-c478-4a68-a4d1-4256566f2858] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002495787s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (228.972979ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:31:55.567331   22420 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:55.567613   22420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:55.567623   22420 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:55.567627   22420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:55.567830   22420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:55.568099   22420 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:55.568408   22420 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:55.568426   22420 addons.go:606] checking whether the cluster is paused
	I1018 11:31:55.568499   22420 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:55.568510   22420 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:55.568865   22420 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:55.586179   22420 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:55.586240   22420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:55.603792   22420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:55.699288   22420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:55.699371   22420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:55.728939   22420 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:31:55.728967   22420 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:55.728971   22420 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:55.728977   22420 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:55.728981   22420 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:55.728987   22420 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:55.728990   22420 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:55.728995   22420 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:55.728999   22420 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:55.729017   22420 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:55.729026   22420 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:55.729030   22420 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:55.729038   22420 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:55.729042   22420 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:55.729046   22420 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:55.729056   22420 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:55.729061   22420 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:55.729065   22420 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:55.729068   22420 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:55.729072   22420 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:55.729075   22420 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:55.729078   22420 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:55.729080   22420 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:55.729083   22420 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:55.729085   22420 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:55.729087   22420 cri.go:89] found id: ""
	I1018 11:31:55.729146   22420 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:55.744536   22420 out.go:203] 
	W1018 11:31:55.745982   22420 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:55.746000   22420 out.go:285] * 
	* 
	W1018 11:31:55.749111   22420 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:55.750795   22420 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)
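Every MK_ADDON_DISABLE_PAUSED failure in this run traces to the same probe: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this crio node runc's default state root /run/runc was never created, so the probe exits 1 before any paused check can happen. A minimal Go sketch of that diagnosis (runOnNode is a hypothetical stand-in for minikube's ssh_runner; this version just executes locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runOnNode is a hypothetical stand-in for minikube's ssh_runner;
// here it simply executes the command on the local machine.
func runOnNode(name string, args ...string) ([]byte, error) {
	return exec.Command(name, args...).CombinedOutput()
}

func main() {
	// The exact command from the logs above.
	out, err := runOnNode("sudo", "runc", "list", "-f", "json")
	if err == nil {
		fmt.Printf("runc state:\n%s", out)
		return
	}
	fmt.Printf("runc list failed (as in this report): %v\n%s", err, out)
	// The reported error is "open /run/runc: no such file or directory",
	// i.e. runc's default state root does not exist on the node at all.
	if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
		fmt.Println("/run/runc is absent: runc manages no containers here")
	}
}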

                                                
                                    
TestAddons/parallel/LocalPath (8.1s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-162665 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-162665 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-162665 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4f3b0056-fad8-4142-a1c9-f5c041b371ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4f3b0056-fad8-4142-a1c9-f5c041b371ed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4f3b0056-fad8-4142-a1c9-f5c041b371ed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003787612s
addons_test.go:967: (dbg) Run:  kubectl --context addons-162665 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 ssh "cat /opt/local-path-provisioner/pvc-6d9219d2-3cde-4934-b9fc-1247e93a5f71_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-162665 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-162665 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (231.852105ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:31:56.917478   22650 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:56.917798   22650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:56.917810   22650 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:56.917814   22650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:56.918062   22650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:56.918493   22650 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:56.918925   22650 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:56.918950   22650 addons.go:606] checking whether the cluster is paused
	I1018 11:31:56.919086   22650 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:56.919104   22650 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:56.919542   22650 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:56.937692   22650 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:56.937755   22650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:56.956074   22650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:57.052341   22650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:57.052418   22650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:57.081263   22650 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:31:57.081288   22650 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:57.081294   22650 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:57.081299   22650 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:57.081302   22650 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:57.081307   22650 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:57.081311   22650 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:57.081314   22650 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:57.081318   22650 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:57.081339   22650 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:57.081345   22650 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:57.081349   22650 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:57.081353   22650 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:57.081357   22650 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:57.081362   22650 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:57.081373   22650 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:57.081382   22650 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:57.081399   22650 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:57.081403   22650 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:57.081406   22650 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:57.081411   22650 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:57.081415   22650 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:57.081418   22650 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:57.081423   22650 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:57.081426   22650 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:57.081430   22650 cri.go:89] found id: ""
	I1018 11:31:57.081478   22650 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:57.095856   22650 out.go:203] 
	W1018 11:31:57.097118   22650 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:57.097140   22650 out.go:285] * 
	W1018 11:31:57.100236   22650 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:57.101802   22650 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.10s)
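The passing half of this test shows the usual local-path pattern: the PVC polls as Pending several times because the provisioner's storage class typically uses WaitForFirstConsumer binding, which defers binding until the consuming pod schedules, after which the pod runs to completion. A small sketch of that phase poll, assuming kubectl on PATH and the test's context and names:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls .status.phase the same way helpers_test.go does
// above. Context, names and timeout are the test's values, nothing new.
func waitForPVCBound(context, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-162665", "test-pvc", "default", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}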

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-l95vf" [4c8e1e2a-6ab0-4cde-8847-b7cdf5b01ab4] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004152733s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (278.555014ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:31:45.063544   21705 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:45.064301   21705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:45.064320   21705 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:45.064329   21705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:45.064858   21705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:45.065255   21705 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:45.065690   21705 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:45.065713   21705 addons.go:606] checking whether the cluster is paused
	I1018 11:31:45.065847   21705 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:45.065867   21705 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:45.066394   21705 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:45.090011   21705 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:45.090099   21705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:45.114717   21705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:45.221813   21705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:45.221909   21705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:45.254364   21705 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:31:45.254387   21705 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:45.254394   21705 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:45.254398   21705 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:45.254402   21705 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:45.254407   21705 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:45.254411   21705 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:45.254414   21705 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:45.254418   21705 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:45.254424   21705 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:45.254428   21705 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:45.254432   21705 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:45.254436   21705 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:45.254440   21705 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:45.254444   21705 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:45.254471   21705 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:45.254483   21705 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:45.254489   21705 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:45.254493   21705 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:45.254497   21705 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:45.254501   21705 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:45.254505   21705 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:45.254509   21705 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:45.254513   21705 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:45.254517   21705 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:45.254529   21705 cri.go:89] found id: ""
	I1018 11:31:45.254586   21705 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:45.271956   21705 out.go:203] 
	W1018 11:31:45.273421   21705 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:45.273444   21705 out.go:285] * 
	W1018 11:31:45.278828   21705 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:45.280142   21705 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)
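Note that the container enumeration step succeeds in every one of these disable attempts: the long list of IDs above comes from crictl filtering on the pod-namespace label, and only the follow-up runc call fails. A sketch of that listing step, run locally rather than over the SSH session minikube uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same filter the logs show: every container whose pod-namespace
	// label is kube-system, printed as bare IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl listing failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
}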

                                                
                                    
TestAddons/parallel/Yakd (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8jpkg" [4962bda8-6ffd-40ea-9239-e813451be3ae] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003356034s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable yakd --alsologtostderr -v=1: exit status 11 (229.323186ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:31:50.332174   22127 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:50.332461   22127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:50.332472   22127 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:50.332476   22127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:50.332773   22127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:50.333203   22127 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:50.333737   22127 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:50.333778   22127 addons.go:606] checking whether the cluster is paused
	I1018 11:31:50.333903   22127 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:50.333920   22127 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:50.334368   22127 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:50.352022   22127 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:50.352067   22127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:50.369247   22127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:50.465506   22127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:50.465614   22127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:50.495379   22127 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:31:50.495398   22127 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:50.495402   22127 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:50.495405   22127 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:50.495408   22127 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:50.495411   22127 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:50.495413   22127 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:50.495416   22127 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:50.495418   22127 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:50.495422   22127 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:50.495424   22127 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:50.495427   22127 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:50.495429   22127 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:50.495432   22127 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:50.495434   22127 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:50.495447   22127 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:50.495454   22127 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:50.495460   22127 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:50.495464   22127 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:50.495468   22127 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:50.495471   22127 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:50.495475   22127 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:50.495479   22127 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:50.495482   22127 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:50.495486   22127 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:50.495490   22127 cri.go:89] found id: ""
	I1018 11:31:50.495544   22127 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:50.509585   22127 out.go:203] 
	W1018 11:31:50.510883   22127 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:50.510903   22127 out.go:285] * 
	W1018 11:31:50.513917   22127 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:50.515319   22127 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.23s)
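One way a paused probe could tolerate this environment (an illustration only, not minikube's actual behavior or fix) is to treat a missing /run/runc as an empty container list: if runc has no state root, it manages no containers, so none can be paused:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// runcContainer mirrors the fields of interest in `runc list -f json` output.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// The failure mode from this report: no state root means runc
		// manages no containers, which is safely "none paused".
		if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
			return nil, nil
		}
		return nil, err
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	fmt.Printf("%d paused containers\n", len(ids))
}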

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-qtz57" [7718c757-52e9-4c21-8387-b22e46dbd672] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003082881s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-162665 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162665 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (229.108694ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:31:48.818823   21934 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:31:48.819110   21934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:48.819118   21934 out.go:374] Setting ErrFile to fd 2...
	I1018 11:31:48.819123   21934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:31:48.819308   21934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:31:48.819575   21934 mustload.go:65] Loading cluster: addons-162665
	I1018 11:31:48.819915   21934 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:48.819934   21934 addons.go:606] checking whether the cluster is paused
	I1018 11:31:48.820013   21934 config.go:182] Loaded profile config "addons-162665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:31:48.820030   21934 host.go:66] Checking if "addons-162665" exists ...
	I1018 11:31:48.820391   21934 cli_runner.go:164] Run: docker container inspect addons-162665 --format={{.State.Status}}
	I1018 11:31:48.838860   21934 ssh_runner.go:195] Run: systemctl --version
	I1018 11:31:48.838915   21934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162665
	I1018 11:31:48.856860   21934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/addons-162665/id_rsa Username:docker}
	I1018 11:31:48.952251   21934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 11:31:48.952336   21934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 11:31:48.981022   21934 cri.go:89] found id: "ff53e54600e125a4c603286ddd3437b940e41d87e89c0a79234afde24316e759"
	I1018 11:31:48.981045   21934 cri.go:89] found id: "488c15000b9785b188e1e54dbedea81958e1071fadb1073702281e17d4d1f0cb"
	I1018 11:31:48.981049   21934 cri.go:89] found id: "a27fdd7026b29e61c0f124b27104ae3956d2aed3110d7b720128e24c0bacc3ec"
	I1018 11:31:48.981051   21934 cri.go:89] found id: "e58b8a219585a9ae96320c366b4c98f0c48358d21f7fb35e348fe8139059d7f9"
	I1018 11:31:48.981054   21934 cri.go:89] found id: "80ee1a432463a8ad3a4376b1f75e176fb6b537149aba4f986e224a7a531ba2b2"
	I1018 11:31:48.981057   21934 cri.go:89] found id: "1c7e5acf2100a7ffae62817db39ede8773b2ec7154e1024f6df4324466851822"
	I1018 11:31:48.981059   21934 cri.go:89] found id: "43a9f95eacc8289c6670fc316e3fc920654dc66aa76a198761a35537e6e3fcec"
	I1018 11:31:48.981062   21934 cri.go:89] found id: "7f162f04036aaf527574c6ac01010e2f827379e18bdc4eaf890380403057279e"
	I1018 11:31:48.981066   21934 cri.go:89] found id: "763f4d62397d6dc0f6a5e51925ddb584fb44a3f2bbed9f528918681dbbd6bef6"
	I1018 11:31:48.981075   21934 cri.go:89] found id: "230e9f4fd374710bc4d70889f01e8c646dbdbed6fe4ac29102ad60f3e1d98d18"
	I1018 11:31:48.981079   21934 cri.go:89] found id: "98ea2b43ee1f985889b32bdfd540789b4f79b7b665ae12fba712166d9fdfd68d"
	I1018 11:31:48.981081   21934 cri.go:89] found id: "c47f2661c734239e8c50f4aef2752bc8c27db6601ea3f442780cbb96bf3187fb"
	I1018 11:31:48.981084   21934 cri.go:89] found id: "7da1e14278c12f7ddce8a0a0317a7585f16e6a2cb0718634ffd628e8b1564fb1"
	I1018 11:31:48.981086   21934 cri.go:89] found id: "03c9856418e49f86ce20ae3c9932b0f0698840f611145c58c7b2d8866d2f1045"
	I1018 11:31:48.981089   21934 cri.go:89] found id: "2d9dfc50ea0d72c6edb7aeb1f80d3aeffcb60ff1588c6aa44fc4a740c0513602"
	I1018 11:31:48.981092   21934 cri.go:89] found id: "f9c877c63013ceff8748532507dbd72e3fc595da82cbcf0558b11733e58c209b"
	I1018 11:31:48.981095   21934 cri.go:89] found id: "07d2ff78db059878fffc6c128c991fcaa07e358737321e30a7ca63865510b349"
	I1018 11:31:48.981099   21934 cri.go:89] found id: "bfb31922272c5600a6afc2b074a98a2f9fee0505fab2e0099c7adce8eeb709fb"
	I1018 11:31:48.981102   21934 cri.go:89] found id: "875e77b7948eab80aa9b4471222daf7bc509923cea2c2a3287b5c68935c922b3"
	I1018 11:31:48.981104   21934 cri.go:89] found id: "371ec5ccac5511f8b51c3cc5a3f9e28f08ab30cc5ce39d314c58dca80a4f2f7a"
	I1018 11:31:48.981106   21934 cri.go:89] found id: "63d2fc63799c7eba62027d2b13f718aea0b0ade7199b414f8d942267b8d686bb"
	I1018 11:31:48.981109   21934 cri.go:89] found id: "7c7aa4df8e12bc03678d8ea7fa448c2903d32fa1c9e81542971c56fc04834660"
	I1018 11:31:48.981112   21934 cri.go:89] found id: "4b7561783145a3f47ae466aa376af5f8b217d771c3af0b6e3f68ed20f952be92"
	I1018 11:31:48.981114   21934 cri.go:89] found id: "ba7d02bd6b76149d2dffe57df548f0b827ec1202b266979b9ed75b54e5542e51"
	I1018 11:31:48.981117   21934 cri.go:89] found id: "a0d7b2076afe90967519b1b47e6b6bcb9248af263a4f3235df4b14b1272a8956"
	I1018 11:31:48.981119   21934 cri.go:89] found id: ""
	I1018 11:31:48.981159   21934 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 11:31:48.995424   21934 out.go:203] 
	W1018 11:31:48.996665   21934 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:31:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 11:31:48.996683   21934 out.go:285] * 
	W1018 11:31:48.999658   21934 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 11:31:49.001261   21934 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-162665 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-874021 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-874021 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-dtkf8" [c6f84277-bbe7-4694-ac1f-baa1ed1e1561] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-874021 -n functional-874021
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 11:47:55.54278168 +0000 UTC m=+1138.063292665
functional_test.go:1645: (dbg) Run:  kubectl --context functional-874021 describe po hello-node-connect-7d85dfc575-dtkf8 -n default
functional_test.go:1645: (dbg) kubectl --context functional-874021 describe po hello-node-connect-7d85dfc575-dtkf8 -n default:
Name:             hello-node-connect-7d85dfc575-dtkf8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-874021/192.168.49.2
Start Time:       Sat, 18 Oct 2025 11:37:55 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfwln (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tfwln:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dtkf8 to functional-874021
Normal   Pulling    6m59s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m59s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-874021 logs hello-node-connect-7d85dfc575-dtkf8 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-874021 logs hello-node-connect-7d85dfc575-dtkf8 -n default: exit status 1 (60.531679ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dtkf8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-874021 logs hello-node-connect-7d85dfc575-dtkf8 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
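The kubelet events above pin down the root cause: with crio's short-name mode set to enforcing, an unqualified image like kicbase/echo-server resolves against multiple candidate registries and is rejected as ambiguous. Fully qualifying the reference removes the ambiguity; a sketch of that normalization (qualify is a hypothetical helper, and docker.io/kicbase/echo-server is an assumed, not confirmed, qualified form):

package main

import (
	"fmt"
	"strings"
)

// qualify prepends a registry to a short image name. Names whose first path
// component looks like a registry host (contains "." or ":", or is
// "localhost") are returned unchanged.
func qualify(image, registry string) string {
	first := strings.SplitN(image, "/", 2)[0]
	if strings.ContainsAny(first, ".:") || first == "localhost" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(qualify("kicbase/echo-server", "docker.io"))
	// prints: docker.io/kicbase/echo-server
}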
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-874021 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-dtkf8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-874021/192.168.49.2
Start Time:       Sat, 18 Oct 2025 11:37:55 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfwln (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tfwln:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dtkf8 to functional-874021
Normal   Pulling    6m59s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m59s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-874021 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-874021 logs -l app=hello-node-connect: exit status 1 (62.009689ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dtkf8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-874021 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-874021 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.211.138
IPs:                      10.98.211.138
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30193/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
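The empty Endpoints field follows directly from the pod status above: a Service only gains endpoints from Ready pods matching its selector, and the lone hello-node-connect pod never left ImagePullBackOff. A quick check of that correspondence, assuming kubectl and the test's context:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Endpoints only appear once a Ready pod matches the Service selector.
	out, err := exec.Command("kubectl", "--context", "functional-874021",
		"get", "endpoints", "hello-node-connect",
		"-o", "jsonpath={.subsets}").CombinedOutput()
	if err != nil {
		fmt.Printf("lookup failed: %v\n%s", err, out)
		return
	}
	if len(out) == 0 {
		fmt.Println("no subsets: no Ready pod backs this Service yet")
		return
	}
	fmt.Printf("subsets: %s\n", out)
}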
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-874021
helpers_test.go:243: (dbg) docker inspect functional-874021:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d14f895cf7f1ec50aa9a59e869a84967caed96fe94a405b445c7572aab2c0c2",
	        "Created": "2025-10-18T11:35:09.369598117Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T11:35:09.402199875Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/8d14f895cf7f1ec50aa9a59e869a84967caed96fe94a405b445c7572aab2c0c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d14f895cf7f1ec50aa9a59e869a84967caed96fe94a405b445c7572aab2c0c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d14f895cf7f1ec50aa9a59e869a84967caed96fe94a405b445c7572aab2c0c2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d14f895cf7f1ec50aa9a59e869a84967caed96fe94a405b445c7572aab2c0c2/8d14f895cf7f1ec50aa9a59e869a84967caed96fe94a405b445c7572aab2c0c2-json.log",
	        "Name": "/functional-874021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-874021:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-874021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d14f895cf7f1ec50aa9a59e869a84967caed96fe94a405b445c7572aab2c0c2",
	                "LowerDir": "/var/lib/docker/overlay2/fe519f170ff438b88f03c807e163031c17ec2e2766f0437f17735eae65a1618b-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe519f170ff438b88f03c807e163031c17ec2e2766f0437f17735eae65a1618b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe519f170ff438b88f03c807e163031c17ec2e2766f0437f17735eae65a1618b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe519f170ff438b88f03c807e163031c17ec2e2766f0437f17735eae65a1618b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-874021",
	                "Source": "/var/lib/docker/volumes/functional-874021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-874021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-874021",
	                "name.minikube.sigs.k8s.io": "functional-874021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd9106313eabbfaf420ddeae7a27ad19a57d3b04a1cdc6998fd544c27f8ddf2d",
	            "SandboxKey": "/var/run/docker/netns/dd9106313eab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-874021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:9d:49:e8:12:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1dd4bf2f84f7e33b6016b1d790629a8177f05604ff062f7e27989fd4ab9ca3f1",
	                    "EndpointID": "7cadb38804f640ee3af6ec80931ea1cff16b88476f76c2d616a844b4b3b6c8e0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-874021",
	                        "8d14f895cf7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
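
The inspect dump above shows every container port published on 127.0.0.1 under a distinct ephemeral host port (22/tcp -> 32778, 2376/tcp -> 32779, 5000/tcp -> 32780, 8441/tcp -> 32781, 32443/tcp -> 32782). Rather than scanning the JSON by eye, a Go-template query can pull a single mapping straight out of docker inspect; this is a general-purpose sketch, not something the test harness runs:

	# host port backing the apiserver port (8441/tcp) of the kic container
	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-874021
	32781

The same template shape works for any key under .NetworkSettings.Ports.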
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-874021 -n functional-874021
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 logs -n 25: (1.239699624s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-874021 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │                     │
	│ start          │ -p functional-874021 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │                     │
	│ ssh            │ functional-874021 ssh sudo cat /etc/ssl/certs/9360.pem                                                    │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ ssh            │ functional-874021 ssh sudo cat /usr/share/ca-certificates/9360.pem                                        │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ ssh            │ functional-874021 ssh sudo cat /etc/ssl/certs/51391683.0                                                  │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ ssh            │ functional-874021 ssh sudo cat /etc/ssl/certs/93602.pem                                                   │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ ssh            │ functional-874021 ssh sudo cat /usr/share/ca-certificates/93602.pem                                       │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ ssh            │ functional-874021 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                  │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ dashboard      │ --url --port 36195 -p functional-874021 --alsologtostderr -v=1                                            │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ ssh            │ functional-874021 ssh sudo cat /etc/test/nested/copy/9360/hosts                                           │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ image          │ functional-874021 image ls --format short --alsologtostderr                                               │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ image          │ functional-874021 image ls --format json --alsologtostderr                                                │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ image          │ functional-874021 image ls --format table --alsologtostderr                                               │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ image          │ functional-874021 image ls --format yaml --alsologtostderr                                                │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ ssh            │ functional-874021 ssh pgrep buildkitd                                                                     │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │                     │
	│ image          │ functional-874021 image build -t localhost/my-image:functional-874021 testdata/build --alsologtostderr    │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ update-context │ functional-874021 update-context --alsologtostderr -v=2                                                   │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ update-context │ functional-874021 update-context --alsologtostderr -v=2                                                   │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ update-context │ functional-874021 update-context --alsologtostderr -v=2                                                   │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ image          │ functional-874021 image ls                                                                                │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:38 UTC │ 18 Oct 25 11:38 UTC │
	│ service        │ functional-874021 service list                                                                            │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:47 UTC │ 18 Oct 25 11:47 UTC │
	│ service        │ functional-874021 service list -o json                                                                    │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:47 UTC │ 18 Oct 25 11:47 UTC │
	│ service        │ functional-874021 service --namespace=default --https --url hello-node                                    │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:47 UTC │                     │
	│ service        │ functional-874021 service hello-node --url --format={{.IP}}                                               │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:47 UTC │                     │
	│ service        │ functional-874021 service hello-node --url                                                                │ functional-874021 │ jenkins │ v1.37.0 │ 18 Oct 25 11:47 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
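
	The audit table is minikube's own record of CLI invocations, read back from the profile's audit log. Note that the last three "service ... --url" rows have empty END TIME cells: those commands were still blocked waiting for the hello-node endpoint when the test deadline expired, which lines up with the ServiceCmdConnect failure. To filter the raw log directly, something like the jq query below should work, assuming the CloudEvents-style entries minikube currently writes (one JSON object per line, with command, args, and profile nested under .data; in this run MINIKUBE_HOME is /home/jenkins/minikube-integration/21647-5865/.minikube):

	$ jq -r 'select(.data.profile=="functional-874021" and .data.command=="service") | [.data.startTime, .data.args] | @tsv' "$MINIKUBE_HOME/logs/audit.json"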
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:38:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:38:08.809978   47729 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:38:08.810113   47729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:38:08.810122   47729 out.go:374] Setting ErrFile to fd 2...
	I1018 11:38:08.810126   47729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:38:08.810441   47729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:38:08.810902   47729 out.go:368] Setting JSON to false
	I1018 11:38:08.811937   47729 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1237,"bootTime":1760786252,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:38:08.812027   47729 start.go:141] virtualization: kvm guest
	I1018 11:38:08.813725   47729 out.go:179] * [functional-874021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:38:08.815404   47729 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:38:08.815470   47729 notify.go:220] Checking for updates...
	I1018 11:38:08.817577   47729 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:38:08.818677   47729 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:38:08.819892   47729 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 11:38:08.823367   47729 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:38:08.824678   47729 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:38:08.826553   47729 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:38:08.827082   47729 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:38:08.850363   47729 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 11:38:08.850444   47729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:38:08.905514   47729 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-18 11:38:08.895667342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:38:08.905622   47729 docker.go:318] overlay module found
	I1018 11:38:08.907878   47729 out.go:179] * Using the docker driver based on the existing profile
	I1018 11:38:08.908967   47729 start.go:305] selected driver: docker
	I1018 11:38:08.908985   47729 start.go:925] validating driver "docker" against &{Name:functional-874021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-874021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:38:08.909089   47729 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:38:08.910697   47729 out.go:203] 
	W1018 11:38:08.911902   47729 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 11:38:08.912846   47729 out.go:203] 
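
	This "Last Start" block is not the ServiceCmdConnect failure itself; it is the expected-failure dry run from TestFunctional's memory validation, rejected client-side because the requested 250MB sits below minikube's 1800MB usable minimum. A dry run with any value at or above that floor passes validation; for example (2048mb is an arbitrary illustrative value, not what the suite uses):

	$ out/minikube-linux-amd64 start -p functional-874021 --dry-run --memory=2048mb --alsologtostderr --driver=docker --container-runtime=crio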
	
	
	==> CRI-O <==
	Oct 18 11:38:14 functional-874021 crio[3587]: time="2025-10-18T11:38:14.409309502Z" level=info msg="Started container" PID=7584 containerID=2c0774bb10721c2d86b0e7d1895050debf40f5e94ac2f017709e15e12bedc463 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mmtt/kubernetes-dashboard id=66b3123c-0fd5-4aa5-af01-8ad79f517fa1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=215da0e5b60062e6983ba815137c19d60282d00159005241ec4b3182ba879eaa
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.14255405Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=49657375-2898-42ae-9544-95c3fa6c538e name=/runtime.v1.ImageService/PullImage
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.143245287Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=01a5d925-6a6e-45c8-a1b3-98ddfbe0468b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.145108278Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=2a030f51-c81d-469c-ba72-68692cf7c7fc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.149454414Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gzh69/dashboard-metrics-scraper" id=31b7f910-dcd0-4afc-a79f-2714daa6d3d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.150135846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.155042055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.155215617Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5c3e89c3a825dabb464808c06e535b26cebbd6bcaa94d2cf7062970b12bcb002/merged/etc/group: no such file or directory"
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.155568107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.184956903Z" level=info msg="Created container 8970ed5b9ac6d6bb3c1fb1cb910399d88567bab903827047f721a9227021e982: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gzh69/dashboard-metrics-scraper" id=31b7f910-dcd0-4afc-a79f-2714daa6d3d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.185740607Z" level=info msg="Starting container: 8970ed5b9ac6d6bb3c1fb1cb910399d88567bab903827047f721a9227021e982" id=dc91de9a-25a2-42f5-b15d-37a950046981 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 11:38:15 functional-874021 crio[3587]: time="2025-10-18T11:38:15.187771975Z" level=info msg="Started container" PID=7700 containerID=8970ed5b9ac6d6bb3c1fb1cb910399d88567bab903827047f721a9227021e982 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gzh69/dashboard-metrics-scraper id=dc91de9a-25a2-42f5-b15d-37a950046981 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d82beb81a8227efddd9a1e6a41f43c425c47f67d78611c6a7b2e712257a990a
	Oct 18 11:38:28 functional-874021 crio[3587]: time="2025-10-18T11:38:28.224316663Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=754d0283-21ae-4fbf-80c6-da88875f6665 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:38:37 functional-874021 crio[3587]: time="2025-10-18T11:38:37.222947675Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cb167cd4-4d77-48f6-8707-2bb5c1216066 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:38:48 functional-874021 crio[3587]: time="2025-10-18T11:38:48.220235181Z" level=info msg="Stopping pod sandbox: 33fa11e17a24ccdf9747b36b32a0e8c7da71796c69769d2e8a92a87f0bf0ca20" id=08ddfa94-4680-4f7a-bb5e-6e5bedfeb607 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 11:38:48 functional-874021 crio[3587]: time="2025-10-18T11:38:48.220289518Z" level=info msg="Stopped pod sandbox (already stopped): 33fa11e17a24ccdf9747b36b32a0e8c7da71796c69769d2e8a92a87f0bf0ca20" id=08ddfa94-4680-4f7a-bb5e-6e5bedfeb607 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 11:38:48 functional-874021 crio[3587]: time="2025-10-18T11:38:48.220586197Z" level=info msg="Removing pod sandbox: 33fa11e17a24ccdf9747b36b32a0e8c7da71796c69769d2e8a92a87f0bf0ca20" id=0f3ce924-437c-4105-b6ac-99587fdf4b00 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 11:38:48 functional-874021 crio[3587]: time="2025-10-18T11:38:48.224055499Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 11:38:48 functional-874021 crio[3587]: time="2025-10-18T11:38:48.224130292Z" level=info msg="Removed pod sandbox: 33fa11e17a24ccdf9747b36b32a0e8c7da71796c69769d2e8a92a87f0bf0ca20" id=0f3ce924-437c-4105-b6ac-99587fdf4b00 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 11:39:19 functional-874021 crio[3587]: time="2025-10-18T11:39:19.223155438Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e89a3e6d-ae12-4129-806e-1dbdfaf4d7e8 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:39:30 functional-874021 crio[3587]: time="2025-10-18T11:39:30.222931923Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7714e7a2-557b-4104-bc92-b2ad8a1862aa name=/runtime.v1.ImageService/PullImage
	Oct 18 11:40:49 functional-874021 crio[3587]: time="2025-10-18T11:40:49.223114224Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=abc71782-fad5-43a4-a01b-b3bafbd70531 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:40:56 functional-874021 crio[3587]: time="2025-10-18T11:40:56.223393055Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8a03639d-4865-410f-884f-c8b60a20f77b name=/runtime.v1.ImageService/PullImage
	Oct 18 11:43:39 functional-874021 crio[3587]: time="2025-10-18T11:43:39.222639981Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=54355197-8b5a-40b8-9252-8dce374d66f0 name=/runtime.v1.ImageService/PullImage
	Oct 18 11:43:42 functional-874021 crio[3587]: time="2025-10-18T11:43:42.222879201Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3f675623-d687-41d8-bc57-18bf40442041 name=/runtime.v1.ImageService/PullImage
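
	The tail of this CRI-O log is the key evidence: kicbase/echo-server:latest is re-requested under kubelet backoff from 11:38 through 11:43 with no matching "Pulled image" line, so the hello-node deployments behind the service under test never get a runnable image. That is consistent with the 600s ServiceCmdConnect and ServiceCmd/DeployApp timeouts. To confirm from the host, one could check the runtime's image store and retry the pull by hand (a diagnostic sketch, not part of the harness):

	$ out/minikube-linux-amd64 -p functional-874021 ssh -- sudo crictl images | grep echo-server
	$ out/minikube-linux-amd64 -p functional-874021 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest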
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8970ed5b9ac6d       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   7d82beb81a822       dashboard-metrics-scraper-77bf4d6c4c-gzh69   kubernetes-dashboard
	2c0774bb10721       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   215da0e5b6006       kubernetes-dashboard-855c9754f9-4mmtt        kubernetes-dashboard
	cd4b0ca5c0db1       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   890e4e7acc9fb       sp-pod                                       default
	05450761c402b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   ef5a5d07db742       busybox-mount                                default
	0eabeb9374053       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   f7fd7e07d9bb4       nginx-svc                                    default
	69b48f899f701       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   91b6b255d1645       mysql-5bb876957f-r7f94                       default
	d5d280039c5c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   a049bdb96b73a       storage-provisioner                          kube-system
	e00c44b079e7f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   548448509f11a       kube-apiserver-functional-874021             kube-system
	76d6899ab4ea5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   efa3d583e7e61       kube-scheduler-functional-874021             kube-system
	1f08b10f969b6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   9ec057b48b235       kube-controller-manager-functional-874021    kube-system
	4c499f7e93780       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   d4edc6fea1a1f       etcd-functional-874021                       kube-system
	0603019cb4df9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   467b9b337b158       coredns-66bc5c9577-p482f                     kube-system
	e8f6184ecbd4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   a049bdb96b73a       storage-provisioner                          kube-system
	d314ff38c3c6d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   be0c429e4f876       kindnet-qs9c4                                kube-system
	d989fb497c792       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   95a92c9404dd4       kube-proxy-tkh69                             kube-system
	ae1a79adc2be3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   467b9b337b158       coredns-66bc5c9577-p482f                     kube-system
	29525926ad924       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   95a92c9404dd4       kube-proxy-tkh69                             kube-system
	95413f1012a31       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   be0c429e4f876       kindnet-qs9c4                                kube-system
	30f38be96cd65       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   9ec057b48b235       kube-controller-manager-functional-874021    kube-system
	be49dddb981f3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   efa3d583e7e61       kube-scheduler-functional-874021             kube-system
	ccbc507ec1349       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   d4edc6fea1a1f       etcd-functional-874021                       kube-system
	
	
	==> coredns [0603019cb4df95ecc1fcdac2f90f7db0af87ec2eca2c66c28e554affeabd6dbd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40972 - 39995 "HINFO IN 6138872411836353657.1607739901210100902. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033381353s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ae1a79adc2be3fd910b98ec336d0e0ccfb5a0943c2724813bd27ed17cf18cdec] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51889 - 2491 "HINFO IN 5164227186937297676.492738699817190250. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.418979396s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-874021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-874021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=functional-874021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T11_35_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 11:35:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-874021
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 11:47:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 11:46:52 +0000   Sat, 18 Oct 2025 11:35:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 11:46:52 +0000   Sat, 18 Oct 2025 11:35:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 11:46:52 +0000   Sat, 18 Oct 2025 11:35:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 11:46:52 +0000   Sat, 18 Oct 2025 11:36:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-874021
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                fe7d01ec-edcd-4627-9615-8f13e88eb052
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7sqzt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-dtkf8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-r7f94                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-p482f                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-874021                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-qs9c4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-874021              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-874021     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tkh69                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-874021              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-gzh69    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4mmtt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-874021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-874021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-874021 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-874021 event: Registered Node functional-874021 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-874021 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-874021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-874021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-874021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-874021 event: Registered Node functional-874021 in Controller
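
	Cross-checking this node description against the container status table above: hello-node-75c85bcc94-7sqzt and hello-node-connect-7d85dfc575-dtkf8 are listed as non-terminated pods aged 10m, yet neither has a container in the CRI-O listing, again pointing at the stalled echo-server pull rather than a scheduling or resource problem (the node reports ample allocatable CPU and memory and no pressure conditions). The same view can be reproduced against the live cluster with standard kubectl commands, e.g.:

	$ kubectl --context functional-874021 get pods -A -o wide
	$ kubectl --context functional-874021 describe node functional-874021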
	
	
	==> dmesg <==
	[  +0.098201] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.055601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.500112] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 11:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.040343] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023998] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023847] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +2.047856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +4.031738] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[Oct18 11:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +16.382621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +32.253751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	
	
	==> etcd [4c499f7e9378055680295cf53760ebc1ad84b46ae209c21f549cdf45602f091a] <==
	{"level":"warn","ts":"2025-10-18T11:37:08.663089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.672314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.679320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.685552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.691644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.698842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.706939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.713161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.720147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.728284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.737714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.743569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.750464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.756680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.764009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.771878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.777943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.792230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.799847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.807071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.825539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:37:08.839200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58058","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T11:47:08.389108Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1122}
	{"level":"info","ts":"2025-10-18T11:47:08.409543Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1122,"took":"19.99033ms","hash":533865530,"current-db-size-bytes":3416064,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-18T11:47:08.409597Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":533865530,"revision":1122,"compact-revision":-1}
	
	
	==> etcd [ccbc507ec13491e38e2c544a8c83831be0bac213eb1b289dd14e1bea03c77160] <==
	{"level":"warn","ts":"2025-10-18T11:35:20.411525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:35:20.417737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:35:20.427952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:35:20.431151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:35:20.438309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:35:20.444776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T11:35:20.496108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T11:36:45.723577Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T11:36:45.723652Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-874021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T11:36:45.723738Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T11:36:45.725228Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T11:36:45.726628Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T11:36:45.726700Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T11:36:45.726700Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T11:36:45.726734Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T11:36:45.726723Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T11:36:45.726751Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T11:36:45.726684Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"error","ts":"2025-10-18T11:36:45.726791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T11:36:45.726815Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T11:36:45.726831Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T11:36:45.728779Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T11:36:45.728843Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T11:36:45.728875Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T11:36:45.728884Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-874021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:47:57 up 30 min,  0 user,  load average: 0.26, 0.32, 0.42
	Linux functional-874021 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [95413f1012a31b245c59754ebd3475167a696bb1b642d0160d557c825700051e] <==
	I1018 11:35:29.578250       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 11:35:29.578516       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 11:35:29.578637       1 main.go:148] setting mtu 1500 for CNI 
	I1018 11:35:29.578657       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 11:35:29.578676       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T11:35:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 11:35:29.780668       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 11:35:29.780690       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 11:35:29.780714       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 11:35:29.802108       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 11:35:59.781920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 11:35:59.781923       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 11:35:59.781957       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 11:35:59.782005       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 11:36:01.281832       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 11:36:01.281861       1 metrics.go:72] Registering metrics
	I1018 11:36:01.281924       1 controller.go:711] "Syncing nftables rules"
	I1018 11:36:09.787849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:36:09.787911       1 main.go:301] handling current node
	I1018 11:36:19.788303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:36:19.788508       1 main.go:301] handling current node
	I1018 11:36:29.784400       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:36:29.784437       1 main.go:301] handling current node
	
	
	==> kindnet [d314ff38c3c6debefbbe736c2cbc9ec4b7970ca4602607c9d7cadaedbaa1ae3d] <==
	I1018 11:45:55.902604       1 main.go:301] handling current node
	I1018 11:46:05.896614       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:46:05.896651       1 main.go:301] handling current node
	I1018 11:46:15.895148       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:46:15.895205       1 main.go:301] handling current node
	I1018 11:46:25.895836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:46:25.895876       1 main.go:301] handling current node
	I1018 11:46:35.894963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:46:35.894994       1 main.go:301] handling current node
	I1018 11:46:45.897409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:46:45.897442       1 main.go:301] handling current node
	I1018 11:46:55.896803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:46:55.896855       1 main.go:301] handling current node
	I1018 11:47:05.897894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:47:05.897925       1 main.go:301] handling current node
	I1018 11:47:15.893868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:47:15.893914       1 main.go:301] handling current node
	I1018 11:47:25.893943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:47:25.893973       1 main.go:301] handling current node
	I1018 11:47:35.895588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:47:35.895619       1 main.go:301] handling current node
	I1018 11:47:45.893888       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:47:45.893953       1 main.go:301] handling current node
	I1018 11:47:55.902856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 11:47:55.902897       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e00c44b079e7f1cb5474757ecbc4fa470c60e4837d5d5e8058d05e53e4e8e335] <==
	I1018 11:37:09.391250       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 11:37:10.272839       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 11:37:10.479303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 11:37:10.480481       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 11:37:10.484605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 11:37:11.077839       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 11:37:11.169853       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 11:37:11.179755       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 11:37:11.233324       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 11:37:11.239260       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 11:37:13.060787       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 11:37:36.632845       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.140.86"}
	I1018 11:37:40.555198       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.255.109"}
	I1018 11:37:40.907507       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.32.14"}
	I1018 11:37:44.302027       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.99.15"}
	E1018 11:37:52.707125       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42470: use of closed network connection
	E1018 11:37:53.373996       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42476: use of closed network connection
	E1018 11:37:54.956304       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42486: use of closed network connection
	I1018 11:37:55.232775       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.211.138"}
	E1018 11:38:04.496693       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56604: use of closed network connection
	I1018 11:38:11.296325       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 11:38:11.406991       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.151.142"}
	I1018 11:38:11.427163       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.108.45"}
	E1018 11:38:12.702590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47084: use of closed network connection
	I1018 11:47:09.292601       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1f08b10f969b69aa00c88554c8ee2694b2ff43f2083fd14ce1a6aff75c95795a] <==
	I1018 11:37:12.787744       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 11:37:12.806172       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 11:37:12.807345       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 11:37:12.807363       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 11:37:12.807394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 11:37:12.807400       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 11:37:12.807414       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 11:37:12.807452       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 11:37:12.807467       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 11:37:12.807488       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 11:37:12.807666       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 11:37:12.808291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 11:37:12.810548       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 11:37:12.810593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:37:12.812942       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 11:37:12.814142       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 11:37:12.820541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 11:37:12.821615       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 11:37:12.825902       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	E1018 11:38:11.343835       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 11:38:11.347996       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 11:38:11.348056       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 11:38:11.353391       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 11:38:11.354828       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 11:38:11.359603       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [30f38be96cd6508342bf8fb35825ee2e5fb9462a6ae41eb169d70df366163ed9] <==
	I1018 11:35:27.893835       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 11:35:27.893854       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 11:35:27.893968       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 11:35:27.894052       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-874021"
	I1018 11:35:27.894110       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 11:35:27.894121       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 11:35:27.895146       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 11:35:27.895187       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 11:35:27.895233       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 11:35:27.895244       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 11:35:27.895299       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 11:35:27.895311       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 11:35:27.895301       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 11:35:27.895312       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 11:35:27.895311       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 11:35:27.895639       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 11:35:27.895953       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 11:35:27.899985       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 11:35:27.900031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:35:27.900152       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 11:35:27.901307       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 11:35:27.907482       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 11:35:27.915820       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 11:35:27.918978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 11:36:12.901563       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [29525926ad9247b151870627565f49dab51d96a1dd3a9e54f95e5eaf68d6a2d9] <==
	I1018 11:35:29.423199       1 server_linux.go:53] "Using iptables proxy"
	I1018 11:35:29.488435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 11:35:29.589141       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 11:35:29.589179       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 11:35:29.589274       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 11:35:29.607916       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 11:35:29.607981       1 server_linux.go:132] "Using iptables Proxier"
	I1018 11:35:29.613294       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 11:35:29.613654       1 server.go:527] "Version info" version="v1.34.1"
	I1018 11:35:29.613697       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 11:35:29.615393       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 11:35:29.615417       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 11:35:29.615435       1 config.go:106] "Starting endpoint slice config controller"
	I1018 11:35:29.615441       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 11:35:29.615452       1 config.go:200] "Starting service config controller"
	I1018 11:35:29.615472       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 11:35:29.615519       1 config.go:309] "Starting node config controller"
	I1018 11:35:29.615900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 11:35:29.715598       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 11:35:29.716738       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 11:35:29.716777       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 11:35:29.716786       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [d989fb497c7926e7538c01e1db052b07bde96eb8890feeb632b6e58f86e3baa2] <==
	E1018 11:36:35.630531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-874021&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:36:36.676088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-874021&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:36:39.475371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-874021&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:36:43.139149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-874021&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:37:01.201359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-874021&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 11:37:20.830635       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 11:37:20.830671       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 11:37:20.830740       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 11:37:20.849419       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 11:37:20.849477       1 server_linux.go:132] "Using iptables Proxier"
	I1018 11:37:20.854791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 11:37:20.855113       1 server.go:527] "Version info" version="v1.34.1"
	I1018 11:37:20.855129       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 11:37:20.856265       1 config.go:106] "Starting endpoint slice config controller"
	I1018 11:37:20.856292       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 11:37:20.856339       1 config.go:309] "Starting node config controller"
	I1018 11:37:20.856349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 11:37:20.856357       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 11:37:20.856355       1 config.go:200] "Starting service config controller"
	I1018 11:37:20.856366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 11:37:20.856466       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 11:37:20.856488       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 11:37:20.956452       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 11:37:20.956506       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 11:37:20.956540       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [76d6899ab4ea5b279f5fe3e040ffecea8715242d094f8ede5fe758a4bfea4b44] <==
	I1018 11:37:08.195766       1 serving.go:386] Generated self-signed cert in-memory
	I1018 11:37:09.315095       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 11:37:09.315123       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 11:37:09.320670       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 11:37:09.320702       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 11:37:09.320718       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 11:37:09.320739       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 11:37:09.320744       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 11:37:09.320708       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 11:37:09.321023       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 11:37:09.321074       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 11:37:09.420895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 11:37:09.420903       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 11:37:09.420919       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kube-scheduler [be49dddb981f3fa79d71a6c13c932ac34d5f8dcba180e1b4388fd9919cbe750e] <==
	E1018 11:35:20.900944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 11:35:20.900967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 11:35:21.778740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 11:35:21.819435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 11:35:21.820201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 11:35:21.924457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 11:35:21.977727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 11:35:22.003937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 11:35:22.014086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 11:35:22.082222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 11:35:22.088150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 11:35:22.095429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 11:35:22.105958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 11:35:22.107903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 11:35:22.126083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 11:35:22.148199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 11:35:22.149011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 11:35:22.154434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1018 11:35:23.798140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 11:36:45.615186       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 11:36:45.615178       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 11:36:45.615280       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 11:36:45.615306       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 11:36:45.615340       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 11:36:45.615364       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 11:45:13 functional-874021 kubelet[4138]: E1018 11:45:13.222427    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:45:23 functional-874021 kubelet[4138]: E1018 11:45:23.222554    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:45:27 functional-874021 kubelet[4138]: E1018 11:45:27.222878    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:45:38 functional-874021 kubelet[4138]: E1018 11:45:38.224380    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:45:40 functional-874021 kubelet[4138]: E1018 11:45:40.222458    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:45:49 functional-874021 kubelet[4138]: E1018 11:45:49.222528    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:45:52 functional-874021 kubelet[4138]: E1018 11:45:52.222219    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:46:04 functional-874021 kubelet[4138]: E1018 11:46:04.222540    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:46:07 functional-874021 kubelet[4138]: E1018 11:46:07.222461    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:46:17 functional-874021 kubelet[4138]: E1018 11:46:17.221834    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:46:22 functional-874021 kubelet[4138]: E1018 11:46:22.222200    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:46:28 functional-874021 kubelet[4138]: E1018 11:46:28.222634    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:46:35 functional-874021 kubelet[4138]: E1018 11:46:35.221984    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:46:40 functional-874021 kubelet[4138]: E1018 11:46:40.222637    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:46:46 functional-874021 kubelet[4138]: E1018 11:46:46.223658    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:46:53 functional-874021 kubelet[4138]: E1018 11:46:53.222710    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:46:58 functional-874021 kubelet[4138]: E1018 11:46:58.223278    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:47:07 functional-874021 kubelet[4138]: E1018 11:47:07.222365    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:47:12 functional-874021 kubelet[4138]: E1018 11:47:12.222841    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:47:19 functional-874021 kubelet[4138]: E1018 11:47:19.222270    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:47:23 functional-874021 kubelet[4138]: E1018 11:47:23.222164    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:47:30 functional-874021 kubelet[4138]: E1018 11:47:30.222341    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:47:36 functional-874021 kubelet[4138]: E1018 11:47:36.221883    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	Oct 18 11:47:44 functional-874021 kubelet[4138]: E1018 11:47:44.224337    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7sqzt" podUID="a8800506-2981-4e47-86f9-2ab1c7261368"
	Oct 18 11:47:47 functional-874021 kubelet[4138]: E1018 11:47:47.222254    4138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-dtkf8" podUID="c6f84277-bbe7-4694-ac1f-baa1ed1e1561"
	
	
	==> kubernetes-dashboard [2c0774bb10721c2d86b0e7d1895050debf40f5e94ac2f017709e15e12bedc463] <==
	2025/10/18 11:38:14 Using namespace: kubernetes-dashboard
	2025/10/18 11:38:14 Using in-cluster config to connect to apiserver
	2025/10/18 11:38:14 Using secret token for csrf signing
	2025/10/18 11:38:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 11:38:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 11:38:14 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 11:38:14 Generating JWE encryption key
	2025/10/18 11:38:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 11:38:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 11:38:14 Initializing JWE encryption key from synchronized object
	2025/10/18 11:38:14 Creating in-cluster Sidecar client
	2025/10/18 11:38:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 11:38:14 Serving insecurely on HTTP port: 9090
	2025/10/18 11:38:44 Successful request to sidecar
	2025/10/18 11:38:14 Starting overwatch
	
	
	==> storage-provisioner [d5d280039c5c03caa7333e490dba03bd636740e47ace2d65f0356269bb662d79] <==
	W1018 11:47:33.280729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:35.284459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:35.288698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:37.291961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:37.295947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:39.299145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:39.304387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:41.308073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:41.313347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:43.316070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:43.320317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:45.323679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:45.327906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:47.331285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:47.335371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:49.338583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:49.343154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:51.345531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:51.349103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:53.352058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:53.357267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:55.361063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:55.364814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:57.367791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 11:47:57.372078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e8f6184ecbd4da92702481d3049d6d526d14f85cc3741e8b0defe9166e4b36c7] <==
	I1018 11:36:35.546705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 11:36:35.549946       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-874021 -n functional-874021
helpers_test.go:269: (dbg) Run:  kubectl --context functional-874021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7sqzt hello-node-connect-7d85dfc575-dtkf8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-874021 describe pod busybox-mount hello-node-75c85bcc94-7sqzt hello-node-connect-7d85dfc575-dtkf8
helpers_test.go:290: (dbg) kubectl --context functional-874021 describe pod busybox-mount hello-node-75c85bcc94-7sqzt hello-node-connect-7d85dfc575-dtkf8:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-874021/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 11:37:59 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://05450761c402bd252201f58ee4f5ee5364ba3ddea43e30797077c04a1b4dde6d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 11:38:01 +0000
	      Finished:     Sat, 18 Oct 2025 11:38:01 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcvv2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jcvv2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m58s  default-scheduler  Successfully assigned default/busybox-mount to functional-874021
	  Normal  Pulling    9m57s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m56s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.363s (1.363s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m56s  kubelet            Created container: mount-munger
	  Normal  Started    9m56s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7sqzt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-874021/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 11:37:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blrjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-blrjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7sqzt to functional-874021
	  Normal   Pulling    7m8s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m6s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    13s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-dtkf8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-874021/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 11:37:55 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfwln (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfwln:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dtkf8 to functional-874021
	  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.80s)
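
The repeated pull failure above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") comes from CRI-O's short-name handling, not from the service logic: with short-name-mode = "enforcing" in /etc/containers/registries.conf, an unqualified reference that matches more than one unqualified-search registry is rejected as ambiguous instead of being resolved. A minimal diagnostic sketch, assuming the stock containers-common config path (not captured in this run):

    # Inspect the short-name policy inside the minikube node
    minikube -p functional-874021 ssh -- grep -E 'short-name-mode|unqualified-search' /etc/containers/registries.conf
    # Workaround sketch: point the deployment at a fully qualified reference,
    # assuming docker.io is the registry the test intends
    kubectl --context functional-874021 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest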

TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-874021 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-874021 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7sqzt" [a8800506-2981-4e47-86f9-2ab1c7261368] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-874021 -n functional-874021
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 11:47:41.230680859 +0000 UTC m=+1123.751191853
functional_test.go:1460: (dbg) Run:  kubectl --context functional-874021 describe po hello-node-75c85bcc94-7sqzt -n default
functional_test.go:1460: (dbg) kubectl --context functional-874021 describe po hello-node-75c85bcc94-7sqzt -n default:
Name:             hello-node-75c85bcc94-7sqzt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-874021/192.168.49.2
Start Time:       Sat, 18 Oct 2025 11:37:40 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blrjm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-blrjm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7sqzt to functional-874021
  Normal   Pulling    6m52s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 9m55s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m52s (x5 over 9m55s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m50s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m38s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-874021 logs hello-node-75c85bcc94-7sqzt -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-874021 logs hello-node-75c85bcc94-7sqzt -n default: exit status 1 (71.536723ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7sqzt" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-874021 logs hello-node-75c85bcc94-7sqzt -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
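
The deployment never becomes ready for the same reason: every pull of the unqualified name is rejected as ambiguous. A hedged sketch of the workaround, assuming docker.io/kicbase/echo-server is the image the test intends:

    # Create the deployment with a fully qualified image so no short-name
    # resolution is attempted, then block on readiness instead of polling
    kubectl --context functional-874021 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest
    kubectl --context functional-874021 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-874021 wait --for=condition=Ready pod -l app=hello-node --timeout=600s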

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image load --daemon kicbase/echo-server:functional-874021 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-874021" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)
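
For context, `image load --daemon` copies a tag from the host Docker daemon into the cluster's CRI-O image store, and `image ls` is the verification step. A sketch of the round trip this test performs; note that images loaded into CRI-O typically list with a localhost/ prefix (an assumption about this environment, consistent with the ImageSaveDaemon check below):

    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-874021
    minikube -p functional-874021 image load --daemon kicbase/echo-server:functional-874021
    minikube -p functional-874021 image ls | grep echo-server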

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image load --daemon kicbase/echo-server:functional-874021 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 image load --daemon kicbase/echo-server:functional-874021 --alsologtostderr: (1.68853488s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 image ls: (2.242406997s)
functional_test.go:461: expected "kicbase/echo-server:functional-874021" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-874021
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image load --daemon kicbase/echo-server:functional-874021 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-874021" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image save kicbase/echo-server:functional-874021 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)
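
`image save` writes the in-cluster image to a tarball at the given path; because the earlier loads never landed the tag in CRI-O, there is nothing to save and no file is created. A verification sketch (the path is illustrative):

    minikube -p functional-874021 image save kicbase/echo-server:functional-874021 /tmp/echo-server-save.tar
    tar -tf /tmp/echo-server-save.tar | head   # a valid archive lists manifest and layer entries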

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1018 11:37:48.547661   43744 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:37:48.547942   43744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:37:48.547953   43744 out.go:374] Setting ErrFile to fd 2...
	I1018 11:37:48.547959   43744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:37:48.548179   43744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:37:48.548823   43744 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:37:48.548948   43744 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:37:48.549391   43744 cli_runner.go:164] Run: docker container inspect functional-874021 --format={{.State.Status}}
	I1018 11:37:48.567447   43744 ssh_runner.go:195] Run: systemctl --version
	I1018 11:37:48.567504   43744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874021
	I1018 11:37:48.584254   43744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/functional-874021/id_rsa Username:docker}
	I1018 11:37:48.678165   43744 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1018 11:37:48.678241   43744 cache_images.go:254] Failed to load cached images for "functional-874021": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1018 11:37:48.678263   43744 cache_images.go:266] failed pushing to: functional-874021

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
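
This failure cascades from ImageSaveToFile: the stderr above shows a plain stat error on the tarball, so the load never starts. A defensive sketch that makes the dependency explicit:

    # Only attempt the load if the save step actually produced an artifact
    test -f /tmp/echo-server-save.tar \
      && minikube -p functional-874021 image load /tmp/echo-server-save.tar \
      || echo "echo-server-save.tar missing; the save step failed earlier"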

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-874021
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image save --daemon kicbase/echo-server:functional-874021 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-874021
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-874021: exit status 1 (16.970415ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-874021

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-874021

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
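
`image save --daemon` exports a cluster image back into the host Docker daemon, where the test then inspects it under the localhost/ prefix that CRI-O-sourced images carry. Since the tag was never present in the cluster, the export has nothing to ship and the inspect finds no image. The verification pair, as a sketch:

    minikube -p functional-874021 image save --daemon kicbase/echo-server:functional-874021
    docker image inspect localhost/kicbase/echo-server:functional-874021 --format '{{.Id}}'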

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 service --namespace=default --https --url hello-node: exit status 115 (522.13983ms)

-- stdout --
	https://192.168.49.2:32674
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-874021 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
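
SVC_UNREACHABLE here means minikube resolved the NodePort URL but found no running pod behind the service; while the hello-node pod sits in ImagePullBackOff, the service's endpoints stay empty. A quick confirmation sketch:

    kubectl --context functional-874021 get endpoints hello-node      # ENDPOINTS shows <none>
    kubectl --context functional-874021 get pods -l app=hello-node -o wide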

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 service hello-node --url --format={{.IP}}: exit status 115 (530.311665ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-874021 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 service hello-node --url: exit status 115 (530.144512ms)

-- stdout --
	http://192.168.49.2:32674
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-874021 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32674
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestJSONOutput/pause/Command (1.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-508594 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-508594 --output=json --user=testUser: exit status 80 (1.594154068s)

-- stdout --
	{"specversion":"1.0","id":"3060fee6-76e8-4a58-9982-d98380f4d344","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-508594 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"c5d959ef-96c1-478d-a5e6-9cbbf9c1d573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T11:57:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"66062c41-ceb3-4bb0-9e10-f5e13d6a54f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-508594 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.59s)
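
The GUEST_PAUSE failure is not about pausing as such: `sudo runc list -f json` fails because /run/runc does not exist on the node. runc only creates that state directory once it has managed a container, so one plausible explanation (an assumption, not confirmed by this log) is that CRI-O is running its containers under a different default OCI runtime such as crun, whose state lives under /run/crun. A diagnostic sketch inside the node:

    minikube -p json-output-508594 ssh -- sudo crio config | grep -A3 default_runtime
    minikube -p json-output-508594 ssh -- ls -d /run/runc /run/crun 2>/dev/null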

TestJSONOutput/unpause/Command (2.13s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-508594 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-508594 --output=json --user=testUser: exit status 80 (2.134507223s)

-- stdout --
	{"specversion":"1.0","id":"86ec813a-da08-4881-b8e1-fd7ed8741cb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-508594 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"55925ee5-9afc-4b7d-86a9-47b72cb3fc3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T11:57:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"fcfe322f-eff5-469c-85bf-daa1ac7e7148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-508594 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.13s)

TestPause/serial/Pause (5.34s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-647824 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-647824 --alsologtostderr -v=5: exit status 80 (1.892718911s)

-- stdout --
	* Pausing node pause-647824 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 12:12:55.071380  219381 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:12:55.071522  219381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:12:55.071532  219381 out.go:374] Setting ErrFile to fd 2...
	I1018 12:12:55.071536  219381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:12:55.071839  219381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:12:55.072210  219381 out.go:368] Setting JSON to false
	I1018 12:12:55.072265  219381 mustload.go:65] Loading cluster: pause-647824
	I1018 12:12:55.072828  219381 config.go:182] Loaded profile config "pause-647824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:12:55.073396  219381 cli_runner.go:164] Run: docker container inspect pause-647824 --format={{.State.Status}}
	I1018 12:12:55.094887  219381 host.go:66] Checking if "pause-647824" exists ...
	I1018 12:12:55.095159  219381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:12:55.159527  219381 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 12:12:55.148504569 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:12:55.160361  219381 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-647824 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 12:12:55.162313  219381 out.go:179] * Pausing node pause-647824 ... 
	I1018 12:12:55.163448  219381 host.go:66] Checking if "pause-647824" exists ...
	I1018 12:12:55.163693  219381 ssh_runner.go:195] Run: systemctl --version
	I1018 12:12:55.163738  219381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647824
	I1018 12:12:55.184830  219381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/pause-647824/id_rsa Username:docker}
	I1018 12:12:55.284167  219381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:12:55.297960  219381 pause.go:52] kubelet running: true
	I1018 12:12:55.298048  219381 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:12:55.435674  219381 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:12:55.435835  219381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:12:55.516860  219381 cri.go:89] found id: "50d28c2fe0ca4a46c8885f0c30960876544d95004f17a077450eb1e7cdf33f72"
	I1018 12:12:55.516886  219381 cri.go:89] found id: "d72362279dc6836f426195b211ae9ea30c1db7f56e5d8046c900d0db3968b27f"
	I1018 12:12:55.516892  219381 cri.go:89] found id: "7693a4b0811b4cb7b39df033c76c5943be6a6afbf1c6499a7bd53455af88b6e3"
	I1018 12:12:55.516898  219381 cri.go:89] found id: "923a555a15597718b9023a79c64c33ac9a6c4ec9c0d996444416ec59f9cd75a4"
	I1018 12:12:55.516902  219381 cri.go:89] found id: "732f379ae73442a4775be491b8dd0d68a5b265bd46131897bb85a5b96b71df7a"
	I1018 12:12:55.516910  219381 cri.go:89] found id: "3ae69f5a4535529976a01a5698f297e1d11e5abeba193acd90054a0e399f2c4c"
	I1018 12:12:55.516915  219381 cri.go:89] found id: "da8b8098a3fa9bab0c1c79e4b6ad487ef813d02d3ac2b77ee770dc454179bd1c"
	I1018 12:12:55.516919  219381 cri.go:89] found id: ""
	I1018 12:12:55.516962  219381 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:12:55.529151  219381 retry.go:31] will retry after 351.904024ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:12:55Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:12:55.882682  219381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:12:55.898681  219381 pause.go:52] kubelet running: false
	I1018 12:12:55.898731  219381 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:12:56.059803  219381 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:12:56.059890  219381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:12:56.151364  219381 cri.go:89] found id: "50d28c2fe0ca4a46c8885f0c30960876544d95004f17a077450eb1e7cdf33f72"
	I1018 12:12:56.151393  219381 cri.go:89] found id: "d72362279dc6836f426195b211ae9ea30c1db7f56e5d8046c900d0db3968b27f"
	I1018 12:12:56.151399  219381 cri.go:89] found id: "7693a4b0811b4cb7b39df033c76c5943be6a6afbf1c6499a7bd53455af88b6e3"
	I1018 12:12:56.151405  219381 cri.go:89] found id: "923a555a15597718b9023a79c64c33ac9a6c4ec9c0d996444416ec59f9cd75a4"
	I1018 12:12:56.151410  219381 cri.go:89] found id: "732f379ae73442a4775be491b8dd0d68a5b265bd46131897bb85a5b96b71df7a"
	I1018 12:12:56.151416  219381 cri.go:89] found id: "3ae69f5a4535529976a01a5698f297e1d11e5abeba193acd90054a0e399f2c4c"
	I1018 12:12:56.151420  219381 cri.go:89] found id: "da8b8098a3fa9bab0c1c79e4b6ad487ef813d02d3ac2b77ee770dc454179bd1c"
	I1018 12:12:56.151424  219381 cri.go:89] found id: ""
	I1018 12:12:56.151470  219381 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:12:56.166197  219381 retry.go:31] will retry after 507.140905ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:12:56Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:12:56.673889  219381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:12:56.691211  219381 pause.go:52] kubelet running: false
	I1018 12:12:56.691324  219381 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:12:56.820406  219381 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:12:56.820474  219381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:12:56.893740  219381 cri.go:89] found id: "50d28c2fe0ca4a46c8885f0c30960876544d95004f17a077450eb1e7cdf33f72"
	I1018 12:12:56.893778  219381 cri.go:89] found id: "d72362279dc6836f426195b211ae9ea30c1db7f56e5d8046c900d0db3968b27f"
	I1018 12:12:56.893784  219381 cri.go:89] found id: "7693a4b0811b4cb7b39df033c76c5943be6a6afbf1c6499a7bd53455af88b6e3"
	I1018 12:12:56.893789  219381 cri.go:89] found id: "923a555a15597718b9023a79c64c33ac9a6c4ec9c0d996444416ec59f9cd75a4"
	I1018 12:12:56.893794  219381 cri.go:89] found id: "732f379ae73442a4775be491b8dd0d68a5b265bd46131897bb85a5b96b71df7a"
	I1018 12:12:56.893798  219381 cri.go:89] found id: "3ae69f5a4535529976a01a5698f297e1d11e5abeba193acd90054a0e399f2c4c"
	I1018 12:12:56.893802  219381 cri.go:89] found id: "da8b8098a3fa9bab0c1c79e4b6ad487ef813d02d3ac2b77ee770dc454179bd1c"
	I1018 12:12:56.893806  219381 cri.go:89] found id: ""
	I1018 12:12:56.893859  219381 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:12:56.908156  219381 out.go:203] 
	W1018 12:12:56.909612  219381 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:12:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:12:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:12:56.909629  219381 out.go:285] * 
	* 
	W1018 12:12:56.913550  219381 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:12:56.915209  219381 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-647824 --alsologtostderr -v=5" : exit status 80
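
The verbose log spells out minikube's pause sequence: disable the kubelet, list CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces, then enumerate runtime state with `runc list` before pausing anything; it is that last listing step that fails on every retry. A manual replay of the same steps, lifted from the ssh_runner lines above (sketch, run inside the node):

    sudo systemctl disable --now kubelet
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json    # fails here: open /run/runc: no such file or directory
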
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-647824
helpers_test.go:243: (dbg) docker inspect pause-647824:

-- stdout --
	[
	    {
	        "Id": "38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9",
	        "Created": "2025-10-18T12:12:11.814838723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:12:11.85272587Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/hostname",
	        "HostsPath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/hosts",
	        "LogPath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9-json.log",
	        "Name": "/pause-647824",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-647824:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-647824",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9",
	                "LowerDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-647824",
	                "Source": "/var/lib/docker/volumes/pause-647824/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-647824",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-647824",
	                "name.minikube.sigs.k8s.io": "pause-647824",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb93ebc062837bd2196c3954216bc5b047f781ebf63690caffd190d02c7300f9",
	            "SandboxKey": "/var/run/docker/netns/eb93ebc06283",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-647824": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:31:25:fd:43:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87aa95497c09c1fc780f85f105c8dd45dcb390d94675b2f3b0efd8f69f220fe8",
	                    "EndpointID": "107f82c4d82a2e52303e03057d237d944365697f64ab55f972a409397debf930",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-647824",
	                        "38a39005943e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-647824 -n pause-647824
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-647824 -n pause-647824: exit status 2 (331.17705ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-647824 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-376567 sudo crio config                                                                                                                                                                                         │ cilium-376567             │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │                     │
	│ delete  │ -p cilium-376567                                                                                                                                                                                                          │ cilium-376567             │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ start   │ -p force-systemd-env-297456 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-297456  │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ stop    │ -p kubernetes-upgrade-291565                                                                                                                                                                                              │ kubernetes-upgrade-291565 │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ start   │ -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-291565 │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │                     │
	│ delete  │ -p force-systemd-env-297456                                                                                                                                                                                               │ force-systemd-env-297456  │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ start   │ -p force-systemd-flag-328756 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-328756 │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:11 UTC │
	│ ssh     │ force-systemd-flag-328756 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-328756 │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ delete  │ -p force-systemd-flag-328756                                                                                                                                                                                              │ force-systemd-flag-328756 │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ start   │ -p cert-expiration-382425 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-382425    │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ delete  │ -p offline-crio-285533                                                                                                                                                                                                    │ offline-crio-285533       │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ start   │ -p cert-options-473888 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:12 UTC │
	│ delete  │ -p missing-upgrade-306315                                                                                                                                                                                                 │ missing-upgrade-306315    │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ start   │ -p stopped-upgrade-881970 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-881970    │ jenkins │ v1.32.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:12 UTC │
	│ ssh     │ cert-options-473888 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ ssh     │ -p cert-options-473888 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ delete  │ -p cert-options-473888                                                                                                                                                                                                    │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ stop    │ stopped-upgrade-881970 stop                                                                                                                                                                                               │ stopped-upgrade-881970    │ jenkins │ v1.32.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p pause-647824 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-647824              │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p stopped-upgrade-881970 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-881970    │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ delete  │ -p stopped-upgrade-881970                                                                                                                                                                                                 │ stopped-upgrade-881970    │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p running-upgrade-054724 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-054724    │ jenkins │ v1.32.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p pause-647824 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-647824              │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ pause   │ -p pause-647824 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-647824              │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │                     │
	│ start   │ -p running-upgrade-054724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-054724    │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
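The Audit table above is rendered from minikube's persistent audit log, not from this run alone. A sketch for querying it directly, assuming the default MINIKUBE_HOME layout and the newline-delimited JSON events minikube writes there:

	jq -r '.data | [.command, .profile, .startTime, .endTime // ""] | @tsv' ~/.minikube/logs/audit.json

Each line of audit.json is a single event, so jq consumes the stream without extra flags; the "// """ guard covers commands that have no recorded end time, like the failed pause above.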
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:12:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:12:56.558792  219764 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:12:56.558919  219764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:12:56.558928  219764 out.go:374] Setting ErrFile to fd 2...
	I1018 12:12:56.558932  219764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:12:56.559120  219764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:12:56.559545  219764 out.go:368] Setting JSON to false
	I1018 12:12:56.560669  219764 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3325,"bootTime":1760786252,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:12:56.560746  219764 start.go:141] virtualization: kvm guest
	I1018 12:12:56.562971  219764 out.go:179] * [running-upgrade-054724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:12:56.564395  219764 notify.go:220] Checking for updates...
	I1018 12:12:56.564422  219764 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:12:56.565989  219764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:12:56.567527  219764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:12:56.568901  219764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:12:56.570224  219764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:12:56.571644  219764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:12:56.573472  219764 config.go:182] Loaded profile config "running-upgrade-054724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 12:12:56.575427  219764 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 12:12:56.576820  219764 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:12:56.601240  219764 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:12:56.601411  219764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:12:56.659595  219764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 12:12:56.649088587 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:12:56.659692  219764 docker.go:318] overlay module found
	I1018 12:12:56.661718  219764 out.go:179] * Using the docker driver based on existing profile
	I1018 12:12:56.663499  219764 start.go:305] selected driver: docker
	I1018 12:12:56.663519  219764 start.go:925] validating driver "docker" against &{Name:running-upgrade-054724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-054724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:12:56.663621  219764 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:12:56.664231  219764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:12:56.726441  219764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 12:12:56.715805121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:12:56.726821  219764 cni.go:84] Creating CNI manager for ""
	I1018 12:12:56.726894  219764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:12:56.726945  219764 start.go:349] cluster config:
	{Name:running-upgrade-054724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-054724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:12:56.734317  219764 out.go:179] * Starting "running-upgrade-054724" primary control-plane node in "running-upgrade-054724" cluster
	I1018 12:12:56.735805  219764 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:12:56.737704  219764 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:12:56.739366  219764 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 12:12:56.739422  219764 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1018 12:12:56.739450  219764 cache.go:58] Caching tarball of preloaded images
	I1018 12:12:56.739521  219764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1018 12:12:56.739571  219764 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:12:56.739584  219764 cache.go:61] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1018 12:12:56.739710  219764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/running-upgrade-054724/config.json ...
	I1018 12:12:56.764589  219764 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1018 12:12:56.764620  219764 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1018 12:12:56.764643  219764 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:12:56.764681  219764 start.go:360] acquireMachinesLock for running-upgrade-054724: {Name:mk85db4f56b1972aacbb213c12172e1e6422ebc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:12:56.764776  219764 start.go:364] duration metric: took 59.002µs to acquireMachinesLock for "running-upgrade-054724"
	I1018 12:12:56.764801  219764 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:12:56.764810  219764 fix.go:54] fixHost starting: 
	I1018 12:12:56.765089  219764 cli_runner.go:164] Run: docker container inspect running-upgrade-054724 --format={{.State.Status}}
	I1018 12:12:56.783809  219764 fix.go:112] recreateIfNeeded on running-upgrade-054724: state=Running err=<nil>
	W1018 12:12:56.783845  219764 fix.go:138] unexpected machine state, will restart: <nil>
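	fixHost read the machine state from the container inspect above and chose to restart rather than recreate. The same check can be repeated by hand with the cli_runner command logged a few lines up:
	
	    docker container inspect running-upgrade-054724 --format={{.State.Status}}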
	
	
	==> CRI-O <==
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.805918328Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.806740366Z" level=info msg="Conmon does support the --sync option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.806784082Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.806802851Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.807537819Z" level=info msg="Conmon does support the --sync option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.807557409Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.814060472Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.814084532Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.814648126Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.815112033Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.815177209Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.821082382Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.864740212Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-lgchh Namespace:kube-system ID:7783eee96a1428170d0150ab797cf48616fcac8f120c38a962bcd0340cd56cd3 UID:5cf52ce4-8646-445e-a78d-f3c858ad3b42 NetNS:/var/run/netns/d1f0b44b-49c1-470d-9b9b-4462c60518a5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a530}] Aliases:map[]}"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.864963511Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-lgchh for CNI network kindnet (type=ptp)"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866053426Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866114644Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866192753Z" level=info msg="Create NRI interface"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.86670909Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866729698Z" level=info msg="runtime interface created"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.86674042Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866746176Z" level=info msg="runtime interface starting up..."
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866751831Z" level=info msg="starting plugins..."
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866783476Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.867059623Z" level=info msg="No systemd watchdog enabled"
	Oct 18 12:12:51 pause-647824 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	50d28c2fe0ca4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   7783eee96a142       coredns-66bc5c9577-lgchh               kube-system
	d72362279dc68       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   d18237eedf606       kindnet-m74rm                          kube-system
	7693a4b0811b4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   2b6094a700eb0       kube-proxy-748x7                       kube-system
	923a555a15597       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   33 seconds ago      Running             etcd                      0                   9a503fd3df735       etcd-pause-647824                      kube-system
	732f379ae7344       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   33 seconds ago      Running             kube-apiserver            0                   a78e6a3a973b9       kube-apiserver-pause-647824            kube-system
	3ae69f5a45355       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   33 seconds ago      Running             kube-controller-manager   0                   e5c2dde39efd4       kube-controller-manager-pause-647824   kube-system
	da8b8098a3fa9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   33 seconds ago      Running             kube-scheduler            0                   4136d165988d3       kube-scheduler-pause-647824            kube-system
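	This table is the CRI-side view that "minikube logs" collects from the node; it should match what crictl reports there. A spot check, assuming the profile is still running:
	
	    out/minikube-linux-amd64 -p pause-647824 ssh -- sudo crictl ps -a
	
	The -a flag includes non-running containers, which is what the STATE and ATTEMPT columns track.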
	
	
	==> coredns [50d28c2fe0ca4a46c8885f0c30960876544d95004f17a077450eb1e7cdf33f72] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59132 - 3645 "HINFO IN 5363780246764631750.6404586236968160047. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015663774s
	
	
	==> describe nodes <==
	Name:               pause-647824
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-647824
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=pause-647824
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_12_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:12:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-647824
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:12:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-647824
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4658715e-a8fc-4a71-af59-e84a6cd0365c
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lgchh                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-647824                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-m74rm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-647824             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-647824    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-748x7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-647824             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node pause-647824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node pause-647824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node pause-647824 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node pause-647824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node pause-647824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node pause-647824 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node pause-647824 event: Registered Node pause-647824 in Controller
	  Normal  NodeReady                13s                kubelet          Node pause-647824 status is now: NodeReady
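	The node description above is plain "kubectl describe node" output captured by the log collector. For a live comparison through the bundled kubectl, assuming the cluster is still reachable:
	
	    out/minikube-linux-amd64 -p pause-647824 kubectl -- describe node pause-647824
	
	Everything after -- is forwarded to a kubectl whose version matches the cluster.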
	
	
	==> dmesg <==
	[  +0.098201] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.055601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.500112] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 11:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.040343] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023998] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023847] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +2.047856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +4.031738] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[Oct18 11:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +16.382621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +32.253751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	
	
	==> etcd [923a555a15597718b9023a79c64c33ac9a6c4ec9c0d996444416ec59f9cd75a4] <==
	{"level":"warn","ts":"2025-10-18T12:12:24.964479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.972659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.979697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.986396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.994360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.002399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.010163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.017531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.025217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.033733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.041699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.050561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.058998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.067323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.075571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.083193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.089722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.097789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.105787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.121161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.128875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.135720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.191848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40558","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:12:40.204101Z","caller":"traceutil/trace.go:172","msg":"trace[163515568] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"118.707331ms","start":"2025-10-18T12:12:40.085370Z","end":"2025-10-18T12:12:40.204078Z","steps":["trace[163515568] 'process raft request'  (duration: 118.566834ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:12:42.157962Z","caller":"traceutil/trace.go:172","msg":"trace[1347152765] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"134.046708ms","start":"2025-10-18T12:12:42.023896Z","end":"2025-10-18T12:12:42.157943Z","steps":["trace[1347152765] 'process raft request'  (duration: 133.883932ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:12:58 up 55 min,  0 user,  load average: 3.40, 2.47, 1.60
	Linux pause-647824 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d72362279dc6836f426195b211ae9ea30c1db7f56e5d8046c900d0db3968b27f] <==
	I1018 12:12:34.410499       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:12:34.410739       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:12:34.410897       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:12:34.410914       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:12:34.410937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:12:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:12:34.613800       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:12:34.613892       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:12:34.613910       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:12:34.614045       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:12:34.910014       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:12:34.910223       1 metrics.go:72] Registering metrics
	I1018 12:12:34.910318       1 controller.go:711] "Syncing nftables rules"
	I1018 12:12:44.615202       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:12:44.615242       1 main.go:301] handling current node
	I1018 12:12:54.617954       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:12:54.617985       1 main.go:301] handling current node
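	The one failure kindnet logs is the NRI dial at startup. Whether the socket exists now can be spot-checked in the node; a hypothetical check, given the CRI-O NRI settings shown earlier in this report:
	
	    out/minikube-linux-amd64 -p pause-647824 ssh -- ls -l /var/run/nri/nri.sock
	
	CRI-O enables NRI on that socket in its current configuration, so a missing file would suggest kindnet started before the CRI-O restart created it.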
	
	
	==> kube-apiserver [732f379ae73442a4775be491b8dd0d68a5b265bd46131897bb85a5b96b71df7a] <==
	I1018 12:12:25.719251       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:12:25.719443       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:12:25.719555       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 12:12:25.722498       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:12:25.725657       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:25.730952       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:25.731690       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:12:25.749223       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:12:26.621491       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:12:26.625112       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:12:26.625130       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:12:27.150573       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:12:27.196223       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:12:27.328627       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:12:27.335159       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 12:12:27.336313       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:12:27.341151       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:12:27.658814       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:12:28.337616       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:12:28.347576       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:12:28.356295       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:12:33.313561       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:33.318423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:33.561437       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:12:33.764602       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3ae69f5a4535529976a01a5698f297e1d11e5abeba193acd90054a0e399f2c4c] <==
	I1018 12:12:32.659078       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:12:32.659200       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:12:32.659263       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:12:32.659281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:12:32.659352       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:12:32.659384       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:12:32.659205       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:12:32.659587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:12:32.659685       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:12:32.659790       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:12:32.660939       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:12:32.661055       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:12:32.662226       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:12:32.662250       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:12:32.663265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:12:32.663622       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:12:32.664025       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:12:32.664069       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:12:32.664094       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:12:32.664103       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:12:32.664107       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:12:32.668598       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:12:32.670676       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-647824" podCIDRs=["10.244.0.0/24"]
	I1018 12:12:32.680812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:12:47.611224       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7693a4b0811b4cb7b39df033c76c5943be6a6afbf1c6499a7bd53455af88b6e3] <==
	I1018 12:12:34.195007       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:12:34.273083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:12:34.373698       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:12:34.373784       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 12:12:34.373890       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:12:34.393103       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:12:34.393163       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:12:34.398537       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:12:34.398973       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:12:34.399000       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:12:34.402564       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:12:34.402585       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:12:34.402609       1 config.go:200] "Starting service config controller"
	I1018 12:12:34.402614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:12:34.402639       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:12:34.402645       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:12:34.402663       1 config.go:309] "Starting node config controller"
	I1018 12:12:34.402675       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:12:34.402681       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:12:34.503671       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:12:34.503681       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:12:34.503720       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [da8b8098a3fa9bab0c1c79e4b6ad487ef813d02d3ac2b77ee770dc454179bd1c] <==
	E1018 12:12:25.683862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:12:25.683893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:12:25.683935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:12:25.683993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:12:25.684073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:12:25.684100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:12:25.684008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:12:25.684187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:12:25.684257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:12:25.684347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:12:25.684385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:12:26.530814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:12:26.562925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:12:26.568914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:12:26.575085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:12:26.596368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:12:26.791446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:12:26.795490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:12:26.832907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:12:26.840935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:12:26.852314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:12:26.862415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:12:26.883107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:12:26.954864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 12:12:29.680375       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.880980    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-cni-cfg\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881003    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-xtables-lock\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881021    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25f830f2-5286-4c11-927d-7e766cc6fa2c-kube-proxy\") pod \"kube-proxy-748x7\" (UID: \"25f830f2-5286-4c11-927d-7e766cc6fa2c\") " pod="kube-system/kube-proxy-748x7"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881084    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25f830f2-5286-4c11-927d-7e766cc6fa2c-xtables-lock\") pod \"kube-proxy-748x7\" (UID: \"25f830f2-5286-4c11-927d-7e766cc6fa2c\") " pod="kube-system/kube-proxy-748x7"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881156    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jv4q\" (UniqueName: \"kubernetes.io/projected/25f830f2-5286-4c11-927d-7e766cc6fa2c-kube-api-access-7jv4q\") pod \"kube-proxy-748x7\" (UID: \"25f830f2-5286-4c11-927d-7e766cc6fa2c\") " pod="kube-system/kube-proxy-748x7"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881185    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-lib-modules\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881207    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6gk\" (UniqueName: \"kubernetes.io/projected/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-kube-api-access-gr6gk\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:34 pause-647824 kubelet[1359]: I1018 12:12:34.227181    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m74rm" podStartSLOduration=1.227155944 podStartE2EDuration="1.227155944s" podCreationTimestamp="2025-10-18 12:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:12:34.205461939 +0000 UTC m=+6.128400864" watchObservedRunningTime="2025-10-18 12:12:34.227155944 +0000 UTC m=+6.150094852"
	Oct 18 12:12:38 pause-647824 kubelet[1359]: I1018 12:12:38.345712    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-748x7" podStartSLOduration=5.345683201 podStartE2EDuration="5.345683201s" podCreationTimestamp="2025-10-18 12:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:12:34.227079512 +0000 UTC m=+6.150018420" watchObservedRunningTime="2025-10-18 12:12:38.345683201 +0000 UTC m=+10.268622125"
	Oct 18 12:12:44 pause-647824 kubelet[1359]: I1018 12:12:44.961116    1359 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 12:12:45 pause-647824 kubelet[1359]: I1018 12:12:45.065354    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cf52ce4-8646-445e-a78d-f3c858ad3b42-config-volume\") pod \"coredns-66bc5c9577-lgchh\" (UID: \"5cf52ce4-8646-445e-a78d-f3c858ad3b42\") " pod="kube-system/coredns-66bc5c9577-lgchh"
	Oct 18 12:12:45 pause-647824 kubelet[1359]: I1018 12:12:45.065440    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqtc\" (UniqueName: \"kubernetes.io/projected/5cf52ce4-8646-445e-a78d-f3c858ad3b42-kube-api-access-mqqtc\") pod \"coredns-66bc5c9577-lgchh\" (UID: \"5cf52ce4-8646-445e-a78d-f3c858ad3b42\") " pod="kube-system/coredns-66bc5c9577-lgchh"
	Oct 18 12:12:46 pause-647824 kubelet[1359]: I1018 12:12:46.231687    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lgchh" podStartSLOduration=13.23166187 podStartE2EDuration="13.23166187s" podCreationTimestamp="2025-10-18 12:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:12:46.231501121 +0000 UTC m=+18.154440029" watchObservedRunningTime="2025-10-18 12:12:46.23166187 +0000 UTC m=+18.154600780"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: W1018 12:12:50.165742    1359 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166332    1359 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166435    1359 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166452    1359 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166464    1359 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.226310    1359 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.226360    1359 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.226374    1359 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:55 pause-647824 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:12:55 pause-647824 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:12:55 pause-647824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:12:55 pause-647824 systemd[1]: kubelet.service: Consumed 1.222s CPU time.
	

-- /stdout --
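The kubelet errors at the end of the log above (dial unix /var/run/crio/crio.sock: connect: no such file or directory) are consistent with CRI-O having been stopped out from under the kubelet during the pause operation. As a minimal sketch of how one might confirm the runtime socket state by hand, assuming shell access to the node for this profile (standard minikube/systemd/crictl usage, not commands taken from this run):

	out/minikube-linux-amd64 ssh -p pause-647824
	# inside the node: is the CRI-O service still up?
	sudo systemctl status crio
	# query the CRI socket directly; while CRI-O is down this fails with the
	# same "connect: no such file or directory" seen in the kubelet log
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a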
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-647824 -n pause-647824
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-647824 -n pause-647824: exit status 2 (327.27532ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
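Here `--format={{.APIServer}}` selects a single field of minikube's status struct, and a non-zero exit code flags that not every component is healthy, which is why the harness notes that exit status 2 "may be ok". As a hedged sketch, the full struct can also be dumped in machine-readable form with the documented JSON output mode (same binary and profile as above):

	out/minikube-linux-amd64 status -p pause-647824 -o json
	# prints a JSON object including Name, Host, Kubelet, APIServer and
	# Kubeconfig fields -- the same fields the templates in this report use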
helpers_test.go:269: (dbg) Run:  kubectl --context pause-647824 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-647824
helpers_test.go:243: (dbg) docker inspect pause-647824:

-- stdout --
	[
	    {
	        "Id": "38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9",
	        "Created": "2025-10-18T12:12:11.814838723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:12:11.85272587Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/hostname",
	        "HostsPath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/hosts",
	        "LogPath": "/var/lib/docker/containers/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9/38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9-json.log",
	        "Name": "/pause-647824",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-647824:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-647824",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38a39005943ee51df2f91c52d47c2d0fc17be2d6069b03a0003e078f84196dd9",
	                "LowerDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23fc88c601ee5d0e0a3dcce16b9373c585f2bf6fe174c66b9f61bba3e863c182/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-647824",
	                "Source": "/var/lib/docker/volumes/pause-647824/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-647824",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-647824",
	                "name.minikube.sigs.k8s.io": "pause-647824",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb93ebc062837bd2196c3954216bc5b047f781ebf63690caffd190d02c7300f9",
	            "SandboxKey": "/var/run/docker/netns/eb93ebc06283",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-647824": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:31:25:fd:43:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87aa95497c09c1fc780f85f105c8dd45dcb390d94675b2f3b0efd8f69f220fe8",
	                    "EndpointID": "107f82c4d82a2e52303e03057d237d944365697f64ab55f972a409397debf930",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-647824",
	                        "38a39005943e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
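The inspect output shows the node container itself still "Running" with "Paused": false, so the failed pause left the Docker-level container state untouched. A quick way to pull just those fields instead of the full dump, using standard docker inspect Go templates (container name taken from this run):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' pause-647824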
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-647824 -n pause-647824
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-647824 -n pause-647824: exit status 2 (321.572653ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
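`{{.Host}}` and `{{.APIServer}}` are two fields of the same status struct, so the two checks above could equally be combined into one template query; a sketch with the binary and profile from this run:

	out/minikube-linux-amd64 status -p pause-647824 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'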
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-647824 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-376567 sudo crio config                                                                                                                                                                                         │ cilium-376567             │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │                     │
	│ delete  │ -p cilium-376567                                                                                                                                                                                                          │ cilium-376567             │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ start   │ -p force-systemd-env-297456 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-297456  │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ stop    │ -p kubernetes-upgrade-291565                                                                                                                                                                                              │ kubernetes-upgrade-291565 │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ start   │ -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-291565 │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │                     │
	│ delete  │ -p force-systemd-env-297456                                                                                                                                                                                               │ force-systemd-env-297456  │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:10 UTC │
	│ start   │ -p force-systemd-flag-328756 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-328756 │ jenkins │ v1.37.0 │ 18 Oct 25 12:10 UTC │ 18 Oct 25 12:11 UTC │
	│ ssh     │ force-systemd-flag-328756 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-328756 │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ delete  │ -p force-systemd-flag-328756                                                                                                                                                                                              │ force-systemd-flag-328756 │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ start   │ -p cert-expiration-382425 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-382425    │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ delete  │ -p offline-crio-285533                                                                                                                                                                                                    │ offline-crio-285533       │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ start   │ -p cert-options-473888 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:12 UTC │
	│ delete  │ -p missing-upgrade-306315                                                                                                                                                                                                 │ missing-upgrade-306315    │ jenkins │ v1.37.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:11 UTC │
	│ start   │ -p stopped-upgrade-881970 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-881970    │ jenkins │ v1.32.0 │ 18 Oct 25 12:11 UTC │ 18 Oct 25 12:12 UTC │
	│ ssh     │ cert-options-473888 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ ssh     │ -p cert-options-473888 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ delete  │ -p cert-options-473888                                                                                                                                                                                                    │ cert-options-473888       │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ stop    │ stopped-upgrade-881970 stop                                                                                                                                                                                               │ stopped-upgrade-881970    │ jenkins │ v1.32.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p pause-647824 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-647824              │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p stopped-upgrade-881970 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-881970    │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ delete  │ -p stopped-upgrade-881970                                                                                                                                                                                                 │ stopped-upgrade-881970    │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p running-upgrade-054724 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-054724    │ jenkins │ v1.32.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ start   │ -p pause-647824 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-647824              │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │ 18 Oct 25 12:12 UTC │
	│ pause   │ -p pause-647824 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-647824              │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │                     │
	│ start   │ -p running-upgrade-054724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-054724    │ jenkins │ v1.37.0 │ 18 Oct 25 12:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:12:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:12:56.558792  219764 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:12:56.558919  219764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:12:56.558928  219764 out.go:374] Setting ErrFile to fd 2...
	I1018 12:12:56.558932  219764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:12:56.559120  219764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:12:56.559545  219764 out.go:368] Setting JSON to false
	I1018 12:12:56.560669  219764 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3325,"bootTime":1760786252,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:12:56.560746  219764 start.go:141] virtualization: kvm guest
	I1018 12:12:56.562971  219764 out.go:179] * [running-upgrade-054724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:12:56.564395  219764 notify.go:220] Checking for updates...
	I1018 12:12:56.564422  219764 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:12:56.565989  219764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:12:56.567527  219764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:12:56.568901  219764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:12:56.570224  219764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:12:56.571644  219764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:12:56.573472  219764 config.go:182] Loaded profile config "running-upgrade-054724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 12:12:56.575427  219764 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 12:12:56.576820  219764 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:12:56.601240  219764 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:12:56.601411  219764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:12:56.659595  219764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 12:12:56.649088587 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:12:56.659692  219764 docker.go:318] overlay module found
	I1018 12:12:56.661718  219764 out.go:179] * Using the docker driver based on existing profile
	I1018 12:12:56.663499  219764 start.go:305] selected driver: docker
	I1018 12:12:56.663519  219764 start.go:925] validating driver "docker" against &{Name:running-upgrade-054724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-054724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:12:56.663621  219764 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:12:56.664231  219764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:12:56.726441  219764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 12:12:56.715805121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:12:56.726821  219764 cni.go:84] Creating CNI manager for ""
	I1018 12:12:56.726894  219764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:12:56.726945  219764 start.go:349] cluster config:
	{Name:running-upgrade-054724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-054724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:12:56.734317  219764 out.go:179] * Starting "running-upgrade-054724" primary control-plane node in "running-upgrade-054724" cluster
	I1018 12:12:56.735805  219764 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:12:56.737704  219764 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:12:56.739366  219764 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 12:12:56.739422  219764 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1018 12:12:56.739450  219764 cache.go:58] Caching tarball of preloaded images
	I1018 12:12:56.739521  219764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1018 12:12:56.739571  219764 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:12:56.739584  219764 cache.go:61] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1018 12:12:56.739710  219764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/running-upgrade-054724/config.json ...
	I1018 12:12:56.764589  219764 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1018 12:12:56.764620  219764 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1018 12:12:56.764643  219764 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:12:56.764681  219764 start.go:360] acquireMachinesLock for running-upgrade-054724: {Name:mk85db4f56b1972aacbb213c12172e1e6422ebc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:12:56.764776  219764 start.go:364] duration metric: took 59.002µs to acquireMachinesLock for "running-upgrade-054724"
	I1018 12:12:56.764801  219764 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:12:56.764810  219764 fix.go:54] fixHost starting: 
	I1018 12:12:56.765089  219764 cli_runner.go:164] Run: docker container inspect running-upgrade-054724 --format={{.State.Status}}
	I1018 12:12:56.783809  219764 fix.go:112] recreateIfNeeded on running-upgrade-054724: state=Running err=<nil>
	W1018 12:12:56.783845  219764 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.805918328Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.806740366Z" level=info msg="Conmon does support the --sync option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.806784082Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.806802851Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.807537819Z" level=info msg="Conmon does support the --sync option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.807557409Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.814060472Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.814084532Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.814648126Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.815112033Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.815177209Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.821082382Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.864740212Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-lgchh Namespace:kube-system ID:7783eee96a1428170d0150ab797cf48616fcac8f120c38a962bcd0340cd56cd3 UID:5cf52ce4-8646-445e-a78d-f3c858ad3b42 NetNS:/var/run/netns/d1f0b44b-49c1-470d-9b9b-4462c60518a5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a530}] Aliases:map[]}"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.864963511Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-lgchh for CNI network kindnet (type=ptp)"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866053426Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866114644Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866192753Z" level=info msg="Create NRI interface"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.86670909Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866729698Z" level=info msg="runtime interface created"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.86674042Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866746176Z" level=info msg="runtime interface starting up..."
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866751831Z" level=info msg="starting plugins..."
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.866783476Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 12:12:51 pause-647824 crio[2218]: time="2025-10-18T12:12:51.867059623Z" level=info msg="No systemd watchdog enabled"
	Oct 18 12:12:51 pause-647824 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	50d28c2fe0ca4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   7783eee96a142       coredns-66bc5c9577-lgchh               kube-system
	d72362279dc68       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   d18237eedf606       kindnet-m74rm                          kube-system
	7693a4b0811b4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   2b6094a700eb0       kube-proxy-748x7                       kube-system
	923a555a15597       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   9a503fd3df735       etcd-pause-647824                      kube-system
	732f379ae7344       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   a78e6a3a973b9       kube-apiserver-pause-647824            kube-system
	3ae69f5a45355       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   e5c2dde39efd4       kube-controller-manager-pause-647824   kube-system
	da8b8098a3fa9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   4136d165988d3       kube-scheduler-pause-647824            kube-system
	
	
	==> coredns [50d28c2fe0ca4a46c8885f0c30960876544d95004f17a077450eb1e7cdf33f72] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59132 - 3645 "HINFO IN 5363780246764631750.6404586236968160047. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015663774s
	
	
	==> describe nodes <==
	Name:               pause-647824
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-647824
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=pause-647824
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_12_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:12:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-647824
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:12:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:12:48 +0000   Sat, 18 Oct 2025 12:12:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-647824
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4658715e-a8fc-4a71-af59-e84a6cd0365c
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lgchh                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-647824                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-m74rm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-647824             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-647824    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-748x7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-647824             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-647824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-647824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-647824 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-647824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-647824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-647824 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node pause-647824 event: Registered Node pause-647824 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-647824 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.098201] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.055601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.500112] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 11:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.040343] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023918] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023998] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +1.023847] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +2.047856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[  +4.031738] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[Oct18 11:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +16.382621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	[ +32.253751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 95 3b af d1 84 6a 42 c4 ce 78 31 08 00
	
	
	==> etcd [923a555a15597718b9023a79c64c33ac9a6c4ec9c0d996444416ec59f9cd75a4] <==
	{"level":"warn","ts":"2025-10-18T12:12:24.964479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.972659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.979697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.986396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:24.994360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.002399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.010163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.017531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.025217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.033733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.041699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.050561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.058998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.067323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.075571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.083193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.089722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.097789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.105787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.121161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.128875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.135720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:12:25.191848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40558","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:12:40.204101Z","caller":"traceutil/trace.go:172","msg":"trace[163515568] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"118.707331ms","start":"2025-10-18T12:12:40.085370Z","end":"2025-10-18T12:12:40.204078Z","steps":["trace[163515568] 'process raft request'  (duration: 118.566834ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:12:42.157962Z","caller":"traceutil/trace.go:172","msg":"trace[1347152765] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"134.046708ms","start":"2025-10-18T12:12:42.023896Z","end":"2025-10-18T12:12:42.157943Z","steps":["trace[1347152765] 'process raft request'  (duration: 133.883932ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:12:59 up 55 min,  0 user,  load average: 3.37, 2.48, 1.61
	Linux pause-647824 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d72362279dc6836f426195b211ae9ea30c1db7f56e5d8046c900d0db3968b27f] <==
	I1018 12:12:34.410499       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:12:34.410739       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:12:34.410897       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:12:34.410914       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:12:34.410937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:12:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:12:34.613800       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:12:34.613892       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:12:34.613910       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:12:34.614045       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:12:34.910014       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:12:34.910223       1 metrics.go:72] Registering metrics
	I1018 12:12:34.910318       1 controller.go:711] "Syncing nftables rules"
	I1018 12:12:44.615202       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:12:44.615242       1 main.go:301] handling current node
	I1018 12:12:54.617954       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:12:54.617985       1 main.go:301] handling current node
	
	
	==> kube-apiserver [732f379ae73442a4775be491b8dd0d68a5b265bd46131897bb85a5b96b71df7a] <==
	I1018 12:12:25.719251       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:12:25.719443       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:12:25.719555       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 12:12:25.722498       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:12:25.725657       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:25.730952       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:25.731690       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:12:25.749223       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:12:26.621491       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:12:26.625112       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:12:26.625130       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:12:27.150573       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:12:27.196223       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:12:27.328627       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:12:27.335159       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 12:12:27.336313       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:12:27.341151       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:12:27.658814       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:12:28.337616       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:12:28.347576       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:12:28.356295       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:12:33.313561       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:33.318423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:12:33.561437       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:12:33.764602       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3ae69f5a4535529976a01a5698f297e1d11e5abeba193acd90054a0e399f2c4c] <==
	I1018 12:12:32.659078       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:12:32.659200       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:12:32.659263       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:12:32.659281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:12:32.659352       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:12:32.659384       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:12:32.659205       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:12:32.659587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:12:32.659685       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:12:32.659790       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:12:32.660939       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:12:32.661055       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:12:32.662226       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:12:32.662250       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:12:32.663265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:12:32.663622       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:12:32.664025       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:12:32.664069       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:12:32.664094       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:12:32.664103       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:12:32.664107       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:12:32.668598       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:12:32.670676       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-647824" podCIDRs=["10.244.0.0/24"]
	I1018 12:12:32.680812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:12:47.611224       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7693a4b0811b4cb7b39df033c76c5943be6a6afbf1c6499a7bd53455af88b6e3] <==
	I1018 12:12:34.195007       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:12:34.273083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:12:34.373698       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:12:34.373784       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 12:12:34.373890       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:12:34.393103       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:12:34.393163       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:12:34.398537       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:12:34.398973       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:12:34.399000       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:12:34.402564       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:12:34.402585       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:12:34.402609       1 config.go:200] "Starting service config controller"
	I1018 12:12:34.402614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:12:34.402639       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:12:34.402645       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:12:34.402663       1 config.go:309] "Starting node config controller"
	I1018 12:12:34.402675       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:12:34.402681       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:12:34.503671       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:12:34.503681       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:12:34.503720       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [da8b8098a3fa9bab0c1c79e4b6ad487ef813d02d3ac2b77ee770dc454179bd1c] <==
	E1018 12:12:25.683862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:12:25.683893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:12:25.683935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:12:25.683993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:12:25.684073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:12:25.684100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:12:25.684008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:12:25.684187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:12:25.684257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:12:25.684347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:12:25.684385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:12:26.530814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:12:26.562925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:12:26.568914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:12:26.575085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:12:26.596368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:12:26.791446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:12:26.795490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:12:26.832907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:12:26.840935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:12:26.852314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:12:26.862415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:12:26.883107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:12:26.954864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 12:12:29.680375       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.880980    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-cni-cfg\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881003    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-xtables-lock\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881021    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25f830f2-5286-4c11-927d-7e766cc6fa2c-kube-proxy\") pod \"kube-proxy-748x7\" (UID: \"25f830f2-5286-4c11-927d-7e766cc6fa2c\") " pod="kube-system/kube-proxy-748x7"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881084    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25f830f2-5286-4c11-927d-7e766cc6fa2c-xtables-lock\") pod \"kube-proxy-748x7\" (UID: \"25f830f2-5286-4c11-927d-7e766cc6fa2c\") " pod="kube-system/kube-proxy-748x7"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881156    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jv4q\" (UniqueName: \"kubernetes.io/projected/25f830f2-5286-4c11-927d-7e766cc6fa2c-kube-api-access-7jv4q\") pod \"kube-proxy-748x7\" (UID: \"25f830f2-5286-4c11-927d-7e766cc6fa2c\") " pod="kube-system/kube-proxy-748x7"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881185    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-lib-modules\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:33 pause-647824 kubelet[1359]: I1018 12:12:33.881207    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6gk\" (UniqueName: \"kubernetes.io/projected/9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4-kube-api-access-gr6gk\") pod \"kindnet-m74rm\" (UID: \"9a7dea6c-73f6-4d37-8f57-c89b11b7f7a4\") " pod="kube-system/kindnet-m74rm"
	Oct 18 12:12:34 pause-647824 kubelet[1359]: I1018 12:12:34.227181    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m74rm" podStartSLOduration=1.227155944 podStartE2EDuration="1.227155944s" podCreationTimestamp="2025-10-18 12:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:12:34.205461939 +0000 UTC m=+6.128400864" watchObservedRunningTime="2025-10-18 12:12:34.227155944 +0000 UTC m=+6.150094852"
	Oct 18 12:12:38 pause-647824 kubelet[1359]: I1018 12:12:38.345712    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-748x7" podStartSLOduration=5.345683201 podStartE2EDuration="5.345683201s" podCreationTimestamp="2025-10-18 12:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:12:34.227079512 +0000 UTC m=+6.150018420" watchObservedRunningTime="2025-10-18 12:12:38.345683201 +0000 UTC m=+10.268622125"
	Oct 18 12:12:44 pause-647824 kubelet[1359]: I1018 12:12:44.961116    1359 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 12:12:45 pause-647824 kubelet[1359]: I1018 12:12:45.065354    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cf52ce4-8646-445e-a78d-f3c858ad3b42-config-volume\") pod \"coredns-66bc5c9577-lgchh\" (UID: \"5cf52ce4-8646-445e-a78d-f3c858ad3b42\") " pod="kube-system/coredns-66bc5c9577-lgchh"
	Oct 18 12:12:45 pause-647824 kubelet[1359]: I1018 12:12:45.065440    1359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqtc\" (UniqueName: \"kubernetes.io/projected/5cf52ce4-8646-445e-a78d-f3c858ad3b42-kube-api-access-mqqtc\") pod \"coredns-66bc5c9577-lgchh\" (UID: \"5cf52ce4-8646-445e-a78d-f3c858ad3b42\") " pod="kube-system/coredns-66bc5c9577-lgchh"
	Oct 18 12:12:46 pause-647824 kubelet[1359]: I1018 12:12:46.231687    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lgchh" podStartSLOduration=13.23166187 podStartE2EDuration="13.23166187s" podCreationTimestamp="2025-10-18 12:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:12:46.231501121 +0000 UTC m=+18.154440029" watchObservedRunningTime="2025-10-18 12:12:46.23166187 +0000 UTC m=+18.154600780"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: W1018 12:12:50.165742    1359 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166332    1359 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166435    1359 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166452    1359 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.166464    1359 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.226310    1359 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.226360    1359 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:50 pause-647824 kubelet[1359]: E1018 12:12:50.226374    1359 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 18 12:12:55 pause-647824 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:12:55 pause-647824 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:12:55 pause-647824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:12:55 pause-647824 systemd[1]: kubelet.service: Consumed 1.222s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-647824 -n pause-647824
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-647824 -n pause-647824: exit status 2 (342.752172ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
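The --format flag is a plain Go template over minikube's status struct, so several components can be read in one call (a sketch; the field names are the same ones the harness queries elsewhere in this report):

	$ out/minikube-linux-amd64 status -p pause-647824 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'

Here the apiserver reports Running, so the non-zero exit reflects another component (the kubelet had just been stopped), which is why the harness flags the status error as possibly OK.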
helpers_test.go:269: (dbg) Run:  kubectl --context pause-647824 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.34s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (236.812015ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:17:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
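MK_ADDON_ENABLE_PAUSED is raised before the addon is ever applied: per the error chain above (enabled failed: check paused: list paused: runc), minikube first checks whether the cluster is paused by listing runc containers on the node, and that probe itself failed because /run/runc is missing. Note the Tmpfs mount on /run in the docker inspect below, so /run/runc only exists once the runtime has created it. The failing probe can be replayed by hand (a sketch; same command as in the stderr above):

	$ minikube -p old-k8s-version-024443 ssh -- sudo runc list -f json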
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-024443 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-024443 describe deploy/metrics-server -n kube-system: exit status 1 (69.937657ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-024443 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
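The assertion expects the MetricsServer registry override to be reflected in the pod template, i.e. an image of fake.domain/registry.k8s.io/echoserver:1.4; since the enable failed before the deployment was created, the deployment info compared against is empty. Had the deployment existed, the image under test could be read directly (a sketch):

	$ kubectl --context old-k8s-version-024443 -n kube-system \
		get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'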
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-024443
helpers_test.go:243: (dbg) docker inspect old-k8s-version-024443:
-- stdout --
	[
	    {
	        "Id": "9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570",
	        "Created": "2025-10-18T12:16:27.110733205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285565,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:27.15817303Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/hosts",
	        "LogPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570-json.log",
	        "Name": "/old-k8s-version-024443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-024443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-024443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570",
	                "LowerDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-024443",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-024443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-024443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-024443",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-024443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a66cd736bee437e1152042d2324f6ecbaaea6ad5c21bd5fc13f4595b59f78508",
	            "SandboxKey": "/var/run/docker/netns/a66cd736bee4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-024443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:bd:8c:6f:e0:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "704be5e99155d09cbf122649ccef6bb6653fc58dfc14bb6d440e5291162e7e3c",
	                    "EndpointID": "7390fdba97c3fb9e3d646bc986e216bec30bc0f2f139a8e60c4debd72d80f048",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-024443",
	                        "9b192bc9f9a7"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
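For reference, the key fields above (container state and the node IP) can be pulled with an inspect format template instead of the full JSON (a sketch):

	$ docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "old-k8s-version-024443").IPAddress}}' old-k8s-version-024443

which, per the JSON above, would print: running 192.168.85.2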
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-024443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-024443 logs -n 25: (1.039144368s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-376567 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo docker system info                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cri-dockerd --version                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo containerd config dump                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                        │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                         │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                          │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:17:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:17:09.989378  303392 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:17:09.989603  303392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:09.989610  303392 out.go:374] Setting ErrFile to fd 2...
	I1018 12:17:09.989615  303392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:09.989923  303392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:17:09.990416  303392 out.go:368] Setting JSON to false
	I1018 12:17:09.991870  303392 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3578,"bootTime":1760786252,"procs":395,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:17:09.991983  303392 start.go:141] virtualization: kvm guest
	I1018 12:17:09.994556  303392 out.go:179] * [default-k8s-diff-port-028309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:17:09.996134  303392 notify.go:220] Checking for updates...
	I1018 12:17:09.996189  303392 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:17:09.997726  303392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:17:09.999143  303392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:10.000462  303392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:17:10.001920  303392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:17:10.003352  303392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:17:10.004974  303392 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:10.005114  303392 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:10.005250  303392 config.go:182] Loaded profile config "old-k8s-version-024443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:17:10.005400  303392 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:17:10.030342  303392 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:17:10.030426  303392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:10.097435  303392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:10.084190507 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:10.097535  303392 docker.go:318] overlay module found
	I1018 12:17:10.098905  303392 out.go:179] * Using the docker driver based on user configuration
	I1018 12:17:10.100491  303392 start.go:305] selected driver: docker
	I1018 12:17:10.100527  303392 start.go:925] validating driver "docker" against <nil>
	I1018 12:17:10.100543  303392 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:17:10.101335  303392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:10.178495  303392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:10.16872536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:10.178723  303392 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:17:10.179048  303392 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:10.180927  303392 out.go:179] * Using Docker driver with root privileges
	I1018 12:17:10.182188  303392 cni.go:84] Creating CNI manager for ""
	I1018 12:17:10.182255  303392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:10.182266  303392 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:17:10.182339  303392 start.go:349] cluster config:
	{Name:default-k8s-diff-port-028309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:10.183812  303392 out.go:179] * Starting "default-k8s-diff-port-028309" primary control-plane node in "default-k8s-diff-port-028309" cluster
	I1018 12:17:10.185119  303392 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:17:10.186484  303392 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:17:10.187909  303392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:10.187946  303392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:17:10.187954  303392 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:17:10.187983  303392 cache.go:58] Caching tarball of preloaded images
	I1018 12:17:10.188065  303392 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:17:10.188075  303392 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:17:10.188150  303392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/config.json ...
	I1018 12:17:10.188169  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/config.json: {Name:mk0a7583c0b13847b99f7e6327a163d03ca928e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:10.208446  303392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:17:10.208469  303392 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:17:10.208484  303392 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:17:10.208516  303392 start.go:360] acquireMachinesLock for default-k8s-diff-port-028309: {Name:mk2adb3e724bc0ee6357d7bccded98e7948efa53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:10.208604  303392 start.go:364] duration metric: took 73.641µs to acquireMachinesLock for "default-k8s-diff-port-028309"
	I1018 12:17:10.208627  303392 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-028309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:17:10.208677  303392 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:17:05.934529  295702 out.go:252]   - Booting up control plane ...
	I1018 12:17:05.934661  295702 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:17:05.934791  295702 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:17:05.934878  295702 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:17:05.952629  295702 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:17:05.953293  295702 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:17:05.961996  295702 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:17:05.962324  295702 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:17:05.962398  295702 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:17:06.071804  295702 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:17:06.071988  295702 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:17:07.573949  295702 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501638356s
	I1018 12:17:07.578334  295702 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:17:07.578454  295702 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 12:17:07.578569  295702 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:17:07.578705  295702 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:17:09.709217  295702 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.130843742s
	I1018 12:17:09.915172  295702 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.336880091s
	W1018 12:17:08.287122  284229 node_ready.go:57] node "old-k8s-version-024443" has "Ready":"False" status (will retry)
	I1018 12:17:09.287337  284229 node_ready.go:49] node "old-k8s-version-024443" is "Ready"
	I1018 12:17:09.287370  284229 node_ready.go:38] duration metric: took 12.503640215s for node "old-k8s-version-024443" to be "Ready" ...
	I1018 12:17:09.287387  284229 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:09.287439  284229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:09.303391  284229 api_server.go:72] duration metric: took 13.348683953s to wait for apiserver process to appear ...
	I1018 12:17:09.303420  284229 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:09.303566  284229 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:17:09.309885  284229 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:17:09.311661  284229 api_server.go:141] control plane version: v1.28.0
	I1018 12:17:09.311687  284229 api_server.go:131] duration metric: took 8.260308ms to wait for apiserver health ...
	I1018 12:17:09.311697  284229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:09.315954  284229 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:09.315990  284229 system_pods.go:61] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.315998  284229 system_pods.go:61] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.316006  284229 system_pods.go:61] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.316011  284229 system_pods.go:61] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.316018  284229 system_pods.go:61] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.316023  284229 system_pods.go:61] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.316028  284229 system_pods.go:61] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.316035  284229 system_pods.go:61] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:09.316044  284229 system_pods.go:74] duration metric: took 4.340144ms to wait for pod list to return data ...
	I1018 12:17:09.316057  284229 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:09.318622  284229 default_sa.go:45] found service account: "default"
	I1018 12:17:09.318644  284229 default_sa.go:55] duration metric: took 2.580433ms for default service account to be created ...
	I1018 12:17:09.318654  284229 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:09.322568  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:09.322607  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.322616  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.322626  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.322631  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.322637  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.322643  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.322652  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.322659  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:09.322688  284229 retry.go:31] will retry after 255.110485ms: missing components: kube-dns
	I1018 12:17:09.585508  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:09.585549  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.585562  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.585571  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.585577  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.585583  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.585588  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.585596  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.585603  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:09.585623  284229 retry.go:31] will retry after 295.668626ms: missing components: kube-dns
	I1018 12:17:09.889287  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:09.889322  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.889332  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.889401  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.889409  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.889456  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.889462  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.889467  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.889472  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:09.889491  284229 retry.go:31] will retry after 391.466411ms: missing components: kube-dns
	I1018 12:17:10.285621  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:10.285657  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:10.285664  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:10.285672  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:10.285678  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:10.285684  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:10.285689  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:10.285695  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:10.285700  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:10.285721  284229 retry.go:31] will retry after 502.967549ms: missing components: kube-dns
	I1018 12:17:10.793348  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:10.793384  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:10.793391  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:10.793397  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:10.793404  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:10.793410  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:10.793416  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:10.793421  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:10.793430  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:10.793448  284229 retry.go:31] will retry after 680.741844ms: missing components: kube-dns
	I1018 12:17:11.580325  295702 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00195535s
	I1018 12:17:11.594486  295702 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:17:11.606936  295702 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:17:11.619839  295702 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:17:11.620244  295702 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-175371 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:17:11.628956  295702 kubeadm.go:318] [bootstrap-token] Using token: s0eyel.sxikqwsssyd1yq10
	I1018 12:17:11.630435  295702 out.go:252]   - Configuring RBAC rules ...
	I1018 12:17:11.630592  295702 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:17:11.634025  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:17:11.643366  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:17:11.646654  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:17:11.649593  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:17:11.652274  295702 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:17:11.988043  295702 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:17:12.418439  295702 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:17:12.986955  295702 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:17:12.987871  295702 kubeadm.go:318] 
	I1018 12:17:12.987931  295702 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:17:12.987938  295702 kubeadm.go:318] 
	I1018 12:17:12.988029  295702 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:17:12.988039  295702 kubeadm.go:318] 
	I1018 12:17:12.988084  295702 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:17:12.988144  295702 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:17:12.988273  295702 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:17:12.988292  295702 kubeadm.go:318] 
	I1018 12:17:12.988352  295702 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:17:12.988360  295702 kubeadm.go:318] 
	I1018 12:17:12.988414  295702 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:17:12.988422  295702 kubeadm.go:318] 
	I1018 12:17:12.988486  295702 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:17:12.988571  295702 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:17:12.988653  295702 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:17:12.988670  295702 kubeadm.go:318] 
	I1018 12:17:12.988820  295702 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:17:12.988927  295702 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:17:12.988937  295702 kubeadm.go:318] 
	I1018 12:17:12.989070  295702 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s0eyel.sxikqwsssyd1yq10 \
	I1018 12:17:12.989196  295702 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 12:17:12.989233  295702 kubeadm.go:318] 	--control-plane 
	I1018 12:17:12.989246  295702 kubeadm.go:318] 
	I1018 12:17:12.989361  295702 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:17:12.989374  295702 kubeadm.go:318] 
	I1018 12:17:12.989481  295702 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s0eyel.sxikqwsssyd1yq10 \
	I1018 12:17:12.989615  295702 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
	I1018 12:17:12.992521  295702 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:17:12.992707  295702 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
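The bootstrap token minted above (s0eyel.sxikqwsssyd1yq10) expires after kubeadm's default TTL; if the join window closes, a fresh token and join line can be produced on the control-plane node, e.g. from inside it via minikube ssh (a sketch, not part of the run):

    sudo kubeadm token list
    sudo kubeadm token create --print-join-command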
	I1018 12:17:12.992731  295702 cni.go:84] Creating CNI manager for ""
	I1018 12:17:12.992741  295702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:12.996653  295702 out.go:179] * Configuring CNI (Container Networking Interface) ...
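kindnet is chosen here because the docker driver is paired with the crio runtime; the manifest is applied via kubectl a little further down. Once applied, the DaemonSet can be checked with something like the following, assuming the conventional app=kindnet label:

    kubectl -n kube-system get daemonset,pods -l app=kindnet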
	I1018 12:17:11.479364  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:11.479395  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:11.479401  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:11.479407  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:11.479410  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:11.479414  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:11.479423  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:11.479427  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:11.479430  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:11.479444  284229 retry.go:31] will retry after 842.277236ms: missing components: kube-dns
	I1018 12:17:12.326663  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:12.326690  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Running
	I1018 12:17:12.326696  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:12.326699  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:12.326702  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:12.326706  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:12.326709  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:12.326712  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:12.326714  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:12.326722  284229 system_pods.go:126] duration metric: took 3.0080623s to wait for k8s-apps to be running ...
	I1018 12:17:12.326742  284229 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:12.326805  284229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:12.341688  284229 system_svc.go:56] duration metric: took 14.934271ms WaitForService to wait for kubelet
	I1018 12:17:12.341736  284229 kubeadm.go:586] duration metric: took 16.387033243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:12.341772  284229 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:12.344633  284229 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:12.344659  284229 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:12.344672  284229 node_conditions.go:105] duration metric: took 2.893864ms to run NodePressure ...
	I1018 12:17:12.344682  284229 start.go:241] waiting for startup goroutines ...
	I1018 12:17:12.344689  284229 start.go:246] waiting for cluster config update ...
	I1018 12:17:12.344698  284229 start.go:255] writing updated cluster config ...
	I1018 12:17:12.345000  284229 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:12.349094  284229 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:12.354126  284229 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.359940  284229 pod_ready.go:94] pod "coredns-5dd5756b68-s4wnq" is "Ready"
	I1018 12:17:12.359973  284229 pod_ready.go:86] duration metric: took 5.816686ms for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.363596  284229 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.368832  284229 pod_ready.go:94] pod "etcd-old-k8s-version-024443" is "Ready"
	I1018 12:17:12.368858  284229 pod_ready.go:86] duration metric: took 5.237265ms for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.377223  284229 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.387405  284229 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-024443" is "Ready"
	I1018 12:17:12.387437  284229 pod_ready.go:86] duration metric: took 10.185515ms for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.394408  284229 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.753723  284229 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-024443" is "Ready"
	I1018 12:17:12.753751  284229 pod_ready.go:86] duration metric: took 359.309074ms for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.954388  284229 pod_ready.go:83] waiting for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.353537  284229 pod_ready.go:94] pod "kube-proxy-tzlpd" is "Ready"
	I1018 12:17:13.353563  284229 pod_ready.go:86] duration metric: took 399.15221ms for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.554517  284229 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.953343  284229 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-024443" is "Ready"
	I1018 12:17:13.953372  284229 pod_ready.go:86] duration metric: took 398.824901ms for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.953386  284229 pod_ready.go:40] duration metric: took 1.604257018s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:14.000846  284229 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	W1018 12:17:11.297656  284991 node_ready.go:57] node "no-preload-406541" has "Ready":"False" status (will retry)
	W1018 12:17:13.307149  284991 node_ready.go:57] node "no-preload-406541" has "Ready":"False" status (will retry)
	I1018 12:17:14.084909  284229 out.go:203] 
	W1018 12:17:14.120594  284229 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 12:17:14.162086  284229 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 12:17:14.307271  284229 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-024443" cluster and "default" namespace by default
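The warning above flags a six-minor-version skew (client 1.34.1, cluster 1.28.0), well outside the one-minor-version window kubectl officially supports, hence the pointer at the bundled client. For example:

    minikube -p old-k8s-version-024443 kubectl -- get pods -A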
	I1018 12:17:10.210809  303392 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:17:10.211061  303392 start.go:159] libmachine.API.Create for "default-k8s-diff-port-028309" (driver="docker")
	I1018 12:17:10.211096  303392 client.go:168] LocalClient.Create starting
	I1018 12:17:10.211197  303392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 12:17:10.211253  303392 main.go:141] libmachine: Decoding PEM data...
	I1018 12:17:10.211271  303392 main.go:141] libmachine: Parsing certificate...
	I1018 12:17:10.211332  303392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 12:17:10.211353  303392 main.go:141] libmachine: Decoding PEM data...
	I1018 12:17:10.211371  303392 main.go:141] libmachine: Parsing certificate...
	I1018 12:17:10.211699  303392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-028309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:17:10.230582  303392 cli_runner.go:211] docker network inspect default-k8s-diff-port-028309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:17:10.230656  303392 network_create.go:284] running [docker network inspect default-k8s-diff-port-028309] to gather additional debugging logs...
	I1018 12:17:10.230674  303392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-028309
	W1018 12:17:10.248645  303392 cli_runner.go:211] docker network inspect default-k8s-diff-port-028309 returned with exit code 1
	I1018 12:17:10.248679  303392 network_create.go:287] error running [docker network inspect default-k8s-diff-port-028309]: docker network inspect default-k8s-diff-port-028309: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-028309 not found
	I1018 12:17:10.248696  303392 network_create.go:289] output of [docker network inspect default-k8s-diff-port-028309]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-028309 not found
	
	** /stderr **
	I1018 12:17:10.248852  303392 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:10.267437  303392 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
	I1018 12:17:10.268053  303392 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6069a4ec9777 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:f7:2a:6b:48:b9} reservation:<nil>}
	I1018 12:17:10.268754  303392 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-670e794a7c9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:d0:78:df:c7:fd} reservation:<nil>}
	I1018 12:17:10.269394  303392 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb34d522296 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:fc:1a:65:23:03} reservation:<nil>}
	I1018 12:17:10.269923  303392 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-704be5e99155 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:69:ed:e3:bb:73} reservation:<nil>}
	I1018 12:17:10.270995  303392 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-dc7610ce5456 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:b6:7c:0a:6d:c2:9c} reservation:<nil>}
	I1018 12:17:10.272601  303392 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5210}
	I1018 12:17:10.272633  303392 network_create.go:124] attempt to create docker network default-k8s-diff-port-028309 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1018 12:17:10.272685  303392 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 default-k8s-diff-port-028309
	I1018 12:17:10.333924  303392 network_create.go:108] docker network default-k8s-diff-port-028309 192.168.103.0/24 created
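After skipping the six subnets already claimed by other profiles, minikube settles on 192.168.103.0/24 and creates the bridge network above. A quick verification sketch using docker's own template syntax:

    docker network inspect default-k8s-diff-port-028309 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'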
	I1018 12:17:10.333952  303392 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-028309" container
	I1018 12:17:10.334071  303392 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:17:10.351696  303392 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-028309 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:17:10.370496  303392 oci.go:103] Successfully created a docker volume default-k8s-diff-port-028309
	I1018 12:17:10.370599  303392 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-028309-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --entrypoint /usr/bin/test -v default-k8s-diff-port-028309:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:17:10.766141  303392 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-028309
	I1018 12:17:10.766175  303392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:10.766195  303392 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:17:10.766251  303392 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-028309:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
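The preload tarball is lz4-compressed and unpacked into the named volume through a throwaway kicbase container. Peeking at the extracted layout can reuse the same entrypoint-override trick (illustrative only; the image ref is copied from the run above, and /var/lib is assumed to hold the extracted tree):

    docker run --rm --entrypoint /bin/ls \
      -v default-k8s-diff-port-028309:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
      /var/lib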
	I1018 12:17:12.998079  295702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:17:13.003370  295702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:17:13.003388  295702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:17:13.017136  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:17:13.262082  295702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:17:13.262262  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-175371 minikube.k8s.io/updated_at=2025_10_18T12_17_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=embed-certs-175371 minikube.k8s.io/primary=true
	I1018 12:17:13.262420  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:13.275244  295702 ops.go:34] apiserver oom_adj: -16
	I1018 12:17:13.576589  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:14.076753  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:14.577362  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:15.076879  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:15.576880  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:16.076879  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:16.576927  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:17.076975  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:17.577462  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:18.077589  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:18.169822  295702 kubeadm.go:1113] duration metric: took 4.907730706s to wait for elevateKubeSystemPrivileges
	I1018 12:17:18.169943  295702 kubeadm.go:402] duration metric: took 15.899918067s to StartCluster
	I1018 12:17:18.169982  295702 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:18.170092  295702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:18.172421  295702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:18.172713  295702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:17:18.172723  295702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:17:18.172836  295702 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:17:18.172920  295702 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175371"
	I1018 12:17:18.172939  295702 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-175371"
	I1018 12:17:18.172969  295702 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:17:18.172982  295702 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:18.173071  295702 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175371"
	I1018 12:17:18.173091  295702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175371"
	I1018 12:17:18.173465  295702 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:17:18.174383  295702 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:17:18.177470  295702 out.go:179] * Verifying Kubernetes components...
	I1018 12:17:18.179118  295702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:18.203275  295702 addons.go:238] Setting addon default-storageclass=true in "embed-certs-175371"
	I1018 12:17:18.203323  295702 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:17:18.203854  295702 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:17:18.203998  295702 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:17:18.205863  295702 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:18.205894  295702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:17:18.205953  295702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:17:18.234520  295702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:17:18.237786  295702 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:18.237809  295702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:17:18.237882  295702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:17:18.263799  295702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:17:18.283808  295702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:17:18.353452  295702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:18.360988  295702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:18.385433  295702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:18.481027  295702 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 12:17:18.482217  295702 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:17:18.712676  295702 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
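The sed pipeline a few lines up rewrites the CoreDNS ConfigMap in place: a hosts block is inserted ahead of the forward directive and a log directive ahead of errors. The resulting Corefile should therefore contain a stanza roughly like this (unrelated directives elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }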
	W1018 12:17:15.796454  284991 node_ready.go:57] node "no-preload-406541" has "Ready":"False" status (will retry)
	I1018 12:17:17.297044  284991 node_ready.go:49] node "no-preload-406541" is "Ready"
	I1018 12:17:17.297072  284991 node_ready.go:38] duration metric: took 12.503291692s for node "no-preload-406541" to be "Ready" ...
	I1018 12:17:17.297084  284991 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:17.297128  284991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:17.309995  284991 api_server.go:72] duration metric: took 12.944612407s to wait for apiserver process to appear ...
	I1018 12:17:17.310026  284991 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:17.310046  284991 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 12:17:17.314280  284991 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
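The healthz probe is a plain HTTPS GET whose success body is the literal string ok; by hand it would look like this (-k because the apiserver certificate is not in the host trust store):

    curl -sk https://192.168.94.2:8443/healthz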
	I1018 12:17:17.315126  284991 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:17.315146  284991 api_server.go:131] duration metric: took 5.114723ms to wait for apiserver health ...
	I1018 12:17:17.315154  284991 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:17.319212  284991 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:17.319248  284991 system_pods.go:61] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:17.319255  284991 system_pods.go:61] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.319261  284991 system_pods.go:61] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.319274  284991 system_pods.go:61] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.319282  284991 system_pods.go:61] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.319286  284991 system_pods.go:61] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.319289  284991 system_pods.go:61] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.319294  284991 system_pods.go:61] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:17.319302  284991 system_pods.go:74] duration metric: took 4.14335ms to wait for pod list to return data ...
	I1018 12:17:17.319309  284991 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:17.321902  284991 default_sa.go:45] found service account: "default"
	I1018 12:17:17.321920  284991 default_sa.go:55] duration metric: took 2.606649ms for default service account to be created ...
	I1018 12:17:17.321928  284991 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:17.324418  284991 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:17.324440  284991 system_pods.go:89] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:17.324448  284991 system_pods.go:89] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.324458  284991 system_pods.go:89] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.324464  284991 system_pods.go:89] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.324471  284991 system_pods.go:89] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.324488  284991 system_pods.go:89] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.324493  284991 system_pods.go:89] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.324500  284991 system_pods.go:89] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:17.324522  284991 retry.go:31] will retry after 270.937375ms: missing components: kube-dns
	I1018 12:17:17.600079  284991 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:17.600111  284991 system_pods.go:89] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:17.600118  284991 system_pods.go:89] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.600125  284991 system_pods.go:89] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.600129  284991 system_pods.go:89] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.600132  284991 system_pods.go:89] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.600135  284991 system_pods.go:89] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.600139  284991 system_pods.go:89] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.600144  284991 system_pods.go:89] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:17.600157  284991 retry.go:31] will retry after 359.077664ms: missing components: kube-dns
	I1018 12:17:17.963458  284991 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:17.963491  284991 system_pods.go:89] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Running
	I1018 12:17:17.963500  284991 system_pods.go:89] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.963505  284991 system_pods.go:89] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.963510  284991 system_pods.go:89] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.963516  284991 system_pods.go:89] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.963521  284991 system_pods.go:89] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.963526  284991 system_pods.go:89] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.963532  284991 system_pods.go:89] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Running
	I1018 12:17:17.963543  284991 system_pods.go:126] duration metric: took 641.608816ms to wait for k8s-apps to be running ...
	I1018 12:17:17.963558  284991 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:17.963606  284991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:17.980464  284991 system_svc.go:56] duration metric: took 16.897132ms WaitForService to wait for kubelet
	I1018 12:17:17.980496  284991 kubeadm.go:586] duration metric: took 13.615118006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:17.980520  284991 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:17.983782  284991 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:17.983813  284991 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:17.983830  284991 node_conditions.go:105] duration metric: took 3.303337ms to run NodePressure ...
	I1018 12:17:17.983845  284991 start.go:241] waiting for startup goroutines ...
	I1018 12:17:17.983859  284991 start.go:246] waiting for cluster config update ...
	I1018 12:17:17.983875  284991 start.go:255] writing updated cluster config ...
	I1018 12:17:17.984155  284991 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:17.988902  284991 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:17.992701  284991 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:17.997229  284991 pod_ready.go:94] pod "coredns-66bc5c9577-bwvrq" is "Ready"
	I1018 12:17:17.997250  284991 pod_ready.go:86] duration metric: took 4.522372ms for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:17.999467  284991 pod_ready.go:83] waiting for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.003331  284991 pod_ready.go:94] pod "etcd-no-preload-406541" is "Ready"
	I1018 12:17:18.003351  284991 pod_ready.go:86] duration metric: took 3.86318ms for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.005221  284991 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.008960  284991 pod_ready.go:94] pod "kube-apiserver-no-preload-406541" is "Ready"
	I1018 12:17:18.008978  284991 pod_ready.go:86] duration metric: took 3.740672ms for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.010873  284991 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.394228  284991 pod_ready.go:94] pod "kube-controller-manager-no-preload-406541" is "Ready"
	I1018 12:17:18.394253  284991 pod_ready.go:86] duration metric: took 383.353644ms for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.593712  284991 pod_ready.go:83] waiting for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.992879  284991 pod_ready.go:94] pod "kube-proxy-9vbmr" is "Ready"
	I1018 12:17:18.992904  284991 pod_ready.go:86] duration metric: took 399.166244ms for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:15.497742  303392 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-028309:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.7314372s)
	I1018 12:17:15.497791  303392 kic.go:203] duration metric: took 4.731592001s to extract preloaded images to volume ...
	W1018 12:17:15.497875  303392 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 12:17:15.497913  303392 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 12:17:15.497958  303392 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:17:15.554503  303392 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-028309 --name default-k8s-diff-port-028309 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --network default-k8s-diff-port-028309 --ip 192.168.103.2 --volume default-k8s-diff-port-028309:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:17:15.848403  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Running}}
	I1018 12:17:15.868112  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Status}}
	I1018 12:17:15.889538  303392 cli_runner.go:164] Run: docker exec default-k8s-diff-port-028309 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:17:15.935717  303392 oci.go:144] the created container "default-k8s-diff-port-028309" has a running status.
	I1018 12:17:15.935747  303392 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa...
	I1018 12:17:16.250940  303392 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:17:16.282552  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Status}}
	I1018 12:17:16.302191  303392 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:17:16.302212  303392 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-028309 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:17:16.355540  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Status}}
	I1018 12:17:16.376024  303392 machine.go:93] provisionDockerMachine start ...
	I1018 12:17:16.376112  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:16.395817  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:16.396165  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:16.396187  303392 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:17:16.533433  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-028309
	
	I1018 12:17:16.533460  303392 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-028309"
	I1018 12:17:16.533528  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:16.553156  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:16.553400  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:16.553416  303392 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-028309 && echo "default-k8s-diff-port-028309" | sudo tee /etc/hostname
	I1018 12:17:16.707408  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-028309
	
	I1018 12:17:16.707493  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:16.731704  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:16.732025  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:16.732060  303392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-028309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-028309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-028309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:17:16.879824  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:17:16.879858  303392 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:17:16.879883  303392 ubuntu.go:190] setting up certificates
	I1018 12:17:16.879895  303392 provision.go:84] configureAuth start
	I1018 12:17:16.879956  303392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-028309
	I1018 12:17:16.901411  303392 provision.go:143] copyHostCerts
	I1018 12:17:16.901473  303392 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:17:16.901487  303392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:17:16.901580  303392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:17:16.902243  303392 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:17:16.902265  303392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:17:16.902330  303392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:17:16.902433  303392 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:17:16.902445  303392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:17:16.902486  303392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:17:16.902559  303392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-028309 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-028309 localhost minikube]
	I1018 12:17:17.475066  303392 provision.go:177] copyRemoteCerts
	I1018 12:17:17.475128  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:17:17.475162  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:17.493468  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:17.592023  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:17:17.616593  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 12:17:17.639348  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:17:17.660022  303392 provision.go:87] duration metric: took 780.113558ms to configureAuth
	I1018 12:17:17.660047  303392 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:17:17.660222  303392 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:17.660343  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:17.680521  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:17.680804  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:17.680830  303392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:17:17.945969  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:17:17.946001  303392 machine.go:96] duration metric: took 1.569952227s to provisionDockerMachine
	I1018 12:17:17.946014  303392 client.go:171] duration metric: took 7.734907093s to LocalClient.Create
	I1018 12:17:17.946036  303392 start.go:167] duration metric: took 7.734975287s to libmachine.API.Create "default-k8s-diff-port-028309"
	I1018 12:17:17.946046  303392 start.go:293] postStartSetup for "default-k8s-diff-port-028309" (driver="docker")
	I1018 12:17:17.946060  303392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:17:17.946122  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:17:17.946169  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:17.965880  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.071011  303392 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:17:18.075228  303392 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:17:18.075259  303392 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:17:18.075273  303392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:17:18.075336  303392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:17:18.075446  303392 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:17:18.075579  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:17:18.086195  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:18.118836  303392 start.go:296] duration metric: took 172.773702ms for postStartSetup
	I1018 12:17:18.119235  303392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-028309
	I1018 12:17:18.143686  303392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/config.json ...
	I1018 12:17:18.143973  303392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:17:18.144013  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:18.167444  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.280503  303392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:17:18.287114  303392 start.go:128] duration metric: took 8.078425s to createHost
	I1018 12:17:18.287143  303392 start.go:83] releasing machines lock for "default-k8s-diff-port-028309", held for 8.078526872s
	I1018 12:17:18.287216  303392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-028309
	I1018 12:17:18.311862  303392 ssh_runner.go:195] Run: cat /version.json
	I1018 12:17:18.311924  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:18.312047  303392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:17:18.312123  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:18.340687  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.341063  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.526742  303392 ssh_runner.go:195] Run: systemctl --version
	I1018 12:17:18.535153  303392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:17:18.574803  303392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:17:18.580562  303392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:17:18.580621  303392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:17:18.611420  303392 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
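The find invocation above is handed to the runner as an argv, so its parentheses arrive unescaped; typed into an interactive shell, the equivalent needs quoting, roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;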
	I1018 12:17:18.611447  303392 start.go:495] detecting cgroup driver to use...
	I1018 12:17:18.611485  303392 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:17:18.611537  303392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:17:18.633596  303392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:17:18.648429  303392 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:17:18.648493  303392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:17:18.669800  303392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:17:18.694052  303392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:17:18.786920  303392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:17:18.883823  303392 docker.go:234] disabling docker service ...
	I1018 12:17:18.883890  303392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:17:18.903035  303392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:17:18.917073  303392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:17:19.005318  303392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:17:19.093575  303392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:17:19.106427  303392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:17:19.121279  303392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:17:19.121342  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.132559  303392 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:17:19.132631  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.142771  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.152185  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.161843  303392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:17:19.170940  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.180720  303392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.195395  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
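Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a reconstruction from the commands, not a dump of the actual file; surrounding TOML sections omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]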
	I1018 12:17:19.205123  303392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:17:19.213211  303392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
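Both kernel knobs touched here are Kubernetes networking prerequisites; the echo is equivalent to the first command below (a sketch, assuming root on the node):

	sysctl -w net.ipv4.ip_forward=1
	# the preceding probe only reads the value; it requires br_netfilter to be loaded
	sysctl net.bridge.bridge-nf-call-iptables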
	I1018 12:17:19.221422  303392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:19.307098  303392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:17:19.419859  303392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:17:19.419914  303392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:17:19.424208  303392 start.go:563] Will wait 60s for crictl version
	I1018 12:17:19.424278  303392 ssh_runner.go:195] Run: which crictl
	I1018 12:17:19.428097  303392 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:17:19.453439  303392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:17:19.453523  303392 ssh_runner.go:195] Run: crio --version
	I1018 12:17:19.483426  303392 ssh_runner.go:195] Run: crio --version
	I1018 12:17:19.514194  303392 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:17:19.193332  284991 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:19.592940  284991 pod_ready.go:94] pod "kube-scheduler-no-preload-406541" is "Ready"
	I1018 12:17:19.592969  284991 pod_ready.go:86] duration metric: took 399.614368ms for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:19.592984  284991 pod_ready.go:40] duration metric: took 1.604049633s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:19.645987  284991 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:17:19.647961  284991 out.go:179] * Done! kubectl is now configured to use "no-preload-406541" cluster and "default" namespace by default
	I1018 12:17:19.515505  303392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-028309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:19.532795  303392 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 12:17:19.537047  303392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
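The bash one-liner above is a replace-or-append idiom: grep -v strips any stale host.minikube.internal entry, the fresh one is appended, and the temp file is copied back over /etc/hosts (cp rather than mv, presumably so the container's bind-mounted /etc/hosts keeps its mount). The same idiom recurs below for control-plane.minikube.internal. The resulting entry:

	192.168.103.1	host.minikube.internal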
	I1018 12:17:19.547362  303392 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-028309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:17:19.547478  303392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:19.547519  303392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:19.580110  303392 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:19.580131  303392 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:17:19.580173  303392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:19.607803  303392 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:19.607829  303392 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:17:19.607838  303392 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1018 12:17:19.607930  303392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-028309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
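Note the doubled ExecStart= in the drop-in above: in a systemd override, an empty ExecStart= first clears the command inherited from the base kubelet.service, and the second line sets the replacement. The generic shape of the idiom (a sketch, not this run's file):

	[Service]
	ExecStart=
	ExecStart=/path/to/binary --flag=value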
	I1018 12:17:19.608029  303392 ssh_runner.go:195] Run: crio config
	I1018 12:17:19.663204  303392 cni.go:84] Creating CNI manager for ""
	I1018 12:17:19.663226  303392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:19.663243  303392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:17:19.663265  303392 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-028309 NodeName:default-k8s-diff-port-028309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:17:19.663413  303392 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-028309"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
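	As an aside (not something this test does): a generated config like the YAML above can be sanity-checked before kubeadm consumes it, assuming a kubeadm new enough to have the validate subcommand (v1.26+):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new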
	
	I1018 12:17:19.663471  303392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:17:19.673382  303392 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:17:19.673471  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:17:19.683728  303392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1018 12:17:19.699354  303392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:17:19.716134  303392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1018 12:17:19.730855  303392 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:17:19.735754  303392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:19.747568  303392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:19.844411  303392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:19.864357  303392 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309 for IP: 192.168.103.2
	I1018 12:17:19.864378  303392 certs.go:195] generating shared ca certs ...
	I1018 12:17:19.864400  303392 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:19.864544  303392 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:17:19.864596  303392 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:17:19.864608  303392 certs.go:257] generating profile certs ...
	I1018 12:17:19.864691  303392 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.key
	I1018 12:17:19.864708  303392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.crt with IP's: []
	I1018 12:17:18.713847  295702 addons.go:514] duration metric: took 541.005493ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:17:18.985588  295702 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-175371" context rescaled to 1 replicas
	W1018 12:17:20.485494  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 12:17:11 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:11.044968212Z" level=info msg="Starting container: d262cee4a47eef6a7956b49672db9275d9721142f738a71e6dff52d5c6207a7d" id=dc0b290e-1c73-4560-9923-c0a813f48b87 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:17:11 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:11.047459993Z" level=info msg="Started container" PID=2130 containerID=d262cee4a47eef6a7956b49672db9275d9721142f738a71e6dff52d5c6207a7d description=kube-system/coredns-5dd5756b68-s4wnq/coredns id=dc0b290e-1c73-4560-9923-c0a813f48b87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=07d80cee7083a9a31c7a510f12a8790cff674aca300bf8caeee637534e0ec3e5
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.382517889Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8f6aa343-e96f-417d-aaf4-f93d3b4cfd4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.382602553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.477249422Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ca9091a55ee35fae049bf4ca357f9718f631bf24625ae844de9ff8d7c5e08dc5 UID:864f752a-d618-4c5e-8c15-67818c8295e2 NetNS:/var/run/netns/3107e3a7-3fb9-4c68-a9ba-3c0b2b4b7719 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000507068}] Aliases:map[]}"
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.47728351Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.488079768Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ca9091a55ee35fae049bf4ca357f9718f631bf24625ae844de9ff8d7c5e08dc5 UID:864f752a-d618-4c5e-8c15-67818c8295e2 NetNS:/var/run/netns/3107e3a7-3fb9-4c68-a9ba-3c0b2b4b7719 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000507068}] Aliases:map[]}"
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.488258193Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.489127481Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.490747759Z" level=info msg="Ran pod sandbox ca9091a55ee35fae049bf4ca357f9718f631bf24625ae844de9ff8d7c5e08dc5 with infra container: default/busybox/POD" id=8f6aa343-e96f-417d-aaf4-f93d3b4cfd4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.492082377Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6c266dba-a285-453c-a278-2a61121a6a80 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.492215918Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6c266dba-a285-453c-a278-2a61121a6a80 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.492262109Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6c266dba-a285-453c-a278-2a61121a6a80 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.492839933Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7866ba1d-ed90-4581-ad1c-4e94e96f258f name=/runtime.v1.ImageService/PullImage
	Oct 18 12:17:15 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:15.497862129Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.802915539Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7866ba1d-ed90-4581-ad1c-4e94e96f258f name=/runtime.v1.ImageService/PullImage
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.804163029Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=378d4bfd-9f81-4684-ac0e-4abecce207e7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.806753008Z" level=info msg="Creating container: default/busybox/busybox" id=f54946a9-34cf-4eca-9f2d-1421bf6b28ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.807749182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.813116197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.813689027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.843200187Z" level=info msg="Created container a85224151b09a487c7269e12e5cce1c163307b7e48684333f7083a218a4317c4: default/busybox/busybox" id=f54946a9-34cf-4eca-9f2d-1421bf6b28ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.843866306Z" level=info msg="Starting container: a85224151b09a487c7269e12e5cce1c163307b7e48684333f7083a218a4317c4" id=e929ef17-d27e-4cae-882e-f153a1b80322 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:17:16 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:16.845568282Z" level=info msg="Started container" PID=2194 containerID=a85224151b09a487c7269e12e5cce1c163307b7e48684333f7083a218a4317c4 description=default/busybox/busybox id=e929ef17-d27e-4cae-882e-f153a1b80322 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca9091a55ee35fae049bf4ca357f9718f631bf24625ae844de9ff8d7c5e08dc5
	Oct 18 12:17:24 old-k8s-version-024443 crio[773]: time="2025-10-18T12:17:24.015647224Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	a85224151b09a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   ca9091a55ee35       busybox                                          default
	d262cee4a47ee       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   07d80cee7083a       coredns-5dd5756b68-s4wnq                         kube-system
	010b13cd2d2b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 seconds ago      Running             storage-provisioner       0                   ce62541bd063c       storage-provisioner                              kube-system
	5c49181e45960       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    26 seconds ago      Running             kindnet-cni               0                   542912e081ece       kindnet-g8pwk                                    kube-system
	a12955757c0a6       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      28 seconds ago      Running             kube-proxy                0                   e89d3d41b79d3       kube-proxy-tzlpd                                 kube-system
	199e95d85313f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      48 seconds ago      Running             etcd                      0                   eedd85106f6c6       etcd-old-k8s-version-024443                      kube-system
	2d7d321a73f4d       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      48 seconds ago      Running             kube-apiserver            0                   9e7bfbf11e4a8       kube-apiserver-old-k8s-version-024443            kube-system
	bffa8caddeca6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      48 seconds ago      Running             kube-scheduler            0                   73386559300d1       kube-scheduler-old-k8s-version-024443            kube-system
	2303a096c4140       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      48 seconds ago      Running             kube-controller-manager   0                   0db692db8e45f       kube-controller-manager-old-k8s-version-024443   kube-system
	
	
	==> coredns [d262cee4a47eef6a7956b49672db9275d9721142f738a71e6dff52d5c6207a7d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45930 - 31040 "HINFO IN 9183219606432757328.4015466200558386009. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035501963s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-024443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-024443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=old-k8s-version-024443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-024443
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:17:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:17:14 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:17:14 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:17:14 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:17:14 +0000   Sat, 18 Oct 2025 12:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-024443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                3a233bec-8fde-40ac-b97e-b54a8a6dbbef
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-s4wnq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-024443                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-g8pwk                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-024443             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-024443    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-tzlpd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-024443             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-024443 event: Registered Node old-k8s-version-024443 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-024443 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [199e95d85313f4bc27402abe8f9f0db1026e3a436d5cd347c5dc0166c471d4ee] <==
	{"level":"warn","ts":"2025-10-18T12:16:55.879294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.790815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-18T12:16:55.879325Z","caller":"traceutil/trace.go:171","msg":"trace[348870404] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:343; }","duration":"161.830724ms","start":"2025-10-18T12:16:55.717487Z","end":"2025-10-18T12:16:55.879317Z","steps":["trace[348870404] 'agreement among raft nodes before linearized reading'  (duration: 161.74701ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:16:55.879476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.382054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-18T12:16:55.879504Z","caller":"traceutil/trace.go:171","msg":"trace[634277713] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:343; }","duration":"212.444281ms","start":"2025-10-18T12:16:55.667051Z","end":"2025-10-18T12:16:55.879495Z","steps":["trace[634277713] 'agreement among raft nodes before linearized reading'  (duration: 212.374913ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.879927Z","caller":"traceutil/trace.go:171","msg":"trace[1582331400] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"251.534132ms","start":"2025-10-18T12:16:55.628379Z","end":"2025-10-18T12:16:55.879914Z","steps":["trace[1582331400] 'process raft request'  (duration: 250.452377ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.202205Z","caller":"traceutil/trace.go:171","msg":"trace[247494963] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"179.472273ms","start":"2025-10-18T12:16:56.022708Z","end":"2025-10-18T12:16:56.20218Z","steps":["trace[247494963] 'process raft request'  (duration: 136.407874ms)","trace[247494963] 'compare'  (duration: 42.923182ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:16:56.202979Z","caller":"traceutil/trace.go:171","msg":"trace[2009933255] linearizableReadLoop","detail":"{readStateIndex:373; appliedIndex:366; }","duration":"132.434381ms","start":"2025-10-18T12:16:56.070528Z","end":"2025-10-18T12:16:56.202963Z","steps":["trace[2009933255] 'read index received'  (duration: 88.637361ms)","trace[2009933255] 'applied index is now lower than readState.Index'  (duration: 43.784523ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:16:56.203237Z","caller":"traceutil/trace.go:171","msg":"trace[73104651] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"179.238214ms","start":"2025-10-18T12:16:56.023988Z","end":"2025-10-18T12:16:56.203226Z","steps":["trace[73104651] 'process raft request'  (duration: 178.650948ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.203384Z","caller":"traceutil/trace.go:171","msg":"trace[2131460249] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"172.272225ms","start":"2025-10-18T12:16:56.031103Z","end":"2025-10-18T12:16:56.203375Z","steps":["trace[2131460249] 'process raft request'  (duration: 171.636426ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.203576Z","caller":"traceutil/trace.go:171","msg":"trace[1092050707] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"171.413587ms","start":"2025-10-18T12:16:56.032155Z","end":"2025-10-18T12:16:56.203569Z","steps":["trace[1092050707] 'process raft request'  (duration: 170.63498ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.203675Z","caller":"traceutil/trace.go:171","msg":"trace[1328453307] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"170.612856ms","start":"2025-10-18T12:16:56.033053Z","end":"2025-10-18T12:16:56.203666Z","steps":["trace[1328453307] 'process raft request'  (duration: 169.787725ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.203836Z","caller":"traceutil/trace.go:171","msg":"trace[1082195352] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"162.182904ms","start":"2025-10-18T12:16:56.041637Z","end":"2025-10-18T12:16:56.20382Z","steps":["trace[1082195352] 'process raft request'  (duration: 161.228208ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.203938Z","caller":"traceutil/trace.go:171","msg":"trace[1665121893] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"136.46635ms","start":"2025-10-18T12:16:56.067465Z","end":"2025-10-18T12:16:56.203931Z","steps":["trace[1665121893] 'process raft request'  (duration: 135.434783ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:16:56.204302Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.800563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3684"}
	{"level":"info","ts":"2025-10-18T12:16:56.204349Z","caller":"traceutil/trace.go:171","msg":"trace[1200166522] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:362; }","duration":"133.874214ms","start":"2025-10-18T12:16:56.070466Z","end":"2025-10-18T12:16:56.20434Z","steps":["trace[1200166522] 'agreement among raft nodes before linearized reading'  (duration: 133.761489ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:16:56.204575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.057659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2025-10-18T12:16:56.204631Z","caller":"traceutil/trace.go:171","msg":"trace[715591521] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:362; }","duration":"109.120941ms","start":"2025-10-18T12:16:56.095499Z","end":"2025-10-18T12:16:56.20462Z","steps":["trace[715591521] 'agreement among raft nodes before linearized reading'  (duration: 109.013908ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.204204Z","caller":"traceutil/trace.go:171","msg":"trace[1384047310] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"131.324182ms","start":"2025-10-18T12:16:56.072865Z","end":"2025-10-18T12:16:56.204189Z","steps":["trace[1384047310] 'process raft request'  (duration: 130.061728ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.320566Z","caller":"traceutil/trace.go:171","msg":"trace[1269756955] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"108.466321ms","start":"2025-10-18T12:16:56.211861Z","end":"2025-10-18T12:16:56.320327Z","steps":["trace[1269756955] 'process raft request'  (duration: 85.813211ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.322055Z","caller":"traceutil/trace.go:171","msg":"trace[213631144] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"108.899084ms","start":"2025-10-18T12:16:56.212932Z","end":"2025-10-18T12:16:56.321831Z","steps":["trace[213631144] 'process raft request'  (duration: 84.917105ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:14.926202Z","caller":"traceutil/trace.go:171","msg":"trace[1309982110] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"103.993033ms","start":"2025-10-18T12:17:14.822188Z","end":"2025-10-18T12:17:14.926181Z","steps":["trace[1309982110] 'process raft request'  (duration: 103.861867ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:15.078542Z","caller":"traceutil/trace.go:171","msg":"trace[1801900389] linearizableReadLoop","detail":"{readStateIndex:472; appliedIndex:471; }","duration":"138.4422ms","start":"2025-10-18T12:17:14.940082Z","end":"2025-10-18T12:17:15.078525Z","steps":["trace[1801900389] 'read index received'  (duration: 138.301292ms)","trace[1801900389] 'applied index is now lower than readState.Index'  (duration: 140.287µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:17:15.078588Z","caller":"traceutil/trace.go:171","msg":"trace[1867978003] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"148.737772ms","start":"2025-10-18T12:17:14.929827Z","end":"2025-10-18T12:17:15.078564Z","steps":["trace[1867978003] 'process raft request'  (duration: 148.586155ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:17:15.078657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.577616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:1312"}
	{"level":"info","ts":"2025-10-18T12:17:15.078686Z","caller":"traceutil/trace.go:171","msg":"trace[1323846966] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:453; }","duration":"138.626323ms","start":"2025-10-18T12:17:14.940051Z","end":"2025-10-18T12:17:15.078677Z","steps":["trace[1323846966] 'agreement among raft nodes before linearized reading'  (duration: 138.547983ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:17:25 up 59 min,  0 user,  load average: 6.41, 4.43, 2.59
	Linux old-k8s-version-024443 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c49181e459602d1eff09c38124c093125e06067118f07d4c9fbd6f7810c76ae] <==
	I1018 12:16:58.748562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:16:58.748812       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:16:58.748986       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:16:58.749008       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:16:58.749038       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:16:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:16:58.949365       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:16:58.949428       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:16:58.949439       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:16:59.047075       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:16:59.349754       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:16:59.446856       1 metrics.go:72] Registering metrics
	I1018 12:16:59.447111       1 controller.go:711] "Syncing nftables rules"
	I1018 12:17:08.957885       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:17:08.958016       1 main.go:301] handling current node
	I1018 12:17:18.950453       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:17:18.950501       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2d7d321a73f4d099e75a213da0b9cd5c8564968cd60297e74829609d0649711a] <==
	I1018 12:16:40.022856       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 12:16:40.078266       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 12:16:40.078380       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 12:16:40.079083       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 12:16:40.079401       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 12:16:40.081455       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 12:16:40.086110       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 12:16:40.091493       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 12:16:40.097165       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:16:40.137569       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:16:40.883914       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:16:40.887533       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:16:40.887558       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:16:41.441737       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:16:41.482987       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:16:41.593849       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:16:41.600451       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 12:16:41.601542       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 12:16:41.607140       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:16:41.969840       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 12:16:43.463802       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 12:16:43.483368       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:16:43.503484       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 12:16:55.885854       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1018 12:16:55.886174       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2303a096c414053f24540361b2b68da4824c6e8ca84aadde1e2d36218ebc8d9d] <==
	I1018 12:16:55.019296       1 shared_informer.go:318] Caches are synced for deployment
	I1018 12:16:55.031504       1 shared_informer.go:318] Caches are synced for job
	I1018 12:16:55.043839       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1018 12:16:55.405608       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 12:16:55.459251       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 12:16:55.459289       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 12:16:55.982941       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1018 12:16:56.019686       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g8pwk"
	I1018 12:16:56.032892       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tzlpd"
	I1018 12:16:56.207333       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xp8sb"
	I1018 12:16:56.339987       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-s4wnq"
	I1018 12:16:56.365172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="383.065381ms"
	I1018 12:16:56.420421       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.161545ms"
	I1018 12:16:56.420580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.066µs"
	I1018 12:16:56.817595       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 12:16:56.832740       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xp8sb"
	I1018 12:16:56.843724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.066871ms"
	I1018 12:16:56.865907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.06815ms"
	I1018 12:16:56.866063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.555µs"
	I1018 12:17:09.177734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.794µs"
	I1018 12:17:09.194305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.872µs"
	I1018 12:17:09.962026       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1018 12:17:11.679975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.281µs"
	I1018 12:17:11.701031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.618916ms"
	I1018 12:17:11.701147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.95µs"
	
	
	==> kube-proxy [a12955757c0a6ba2fdca295b870c6c1d7972309f01c031498b3ac945dd417072] <==
	I1018 12:16:56.697494       1 server_others.go:69] "Using iptables proxy"
	I1018 12:16:56.708699       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1018 12:16:56.739137       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:16:56.742410       1 server_others.go:152] "Using iptables Proxier"
	I1018 12:16:56.742522       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 12:16:56.742554       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 12:16:56.742618       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 12:16:56.743012       1 server.go:846] "Version info" version="v1.28.0"
	I1018 12:16:56.743037       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:56.743914       1 config.go:97] "Starting endpoint slice config controller"
	I1018 12:16:56.743976       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 12:16:56.743973       1 config.go:188] "Starting service config controller"
	I1018 12:16:56.744005       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 12:16:56.744039       1 config.go:315] "Starting node config controller"
	I1018 12:16:56.744046       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 12:16:56.844164       1 shared_informer.go:318] Caches are synced for node config
	I1018 12:16:56.844186       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 12:16:56.844193       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bffa8caddeca6431384d3a4af048463ba376f7f5d1e074611868255d6350cb82] <==
	W1018 12:16:40.025177       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 12:16:40.025188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1018 12:16:40.025871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 12:16:40.025970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 12:16:40.026344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 12:16:40.026454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 12:16:40.026825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 12:16:40.026910       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 12:16:40.030641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 12:16:40.030788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 12:16:40.939598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 12:16:40.939638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1018 12:16:40.966065       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 12:16:40.966125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 12:16:40.984016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 12:16:40.984057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 12:16:41.033005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 12:16:41.033046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 12:16:41.161909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 12:16:41.162052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 12:16:41.262540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 12:16:41.262577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 12:16:41.447393       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1018 12:16:41.447432       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1018 12:16:44.713683       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 12:16:56 old-k8s-version-024443 kubelet[1392]: I1018 12:16:56.330204    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d19b38b0-d7bc-4c78-8c03-60b85301d9d4-kube-proxy\") pod \"kube-proxy-tzlpd\" (UID: \"d19b38b0-d7bc-4c78-8c03-60b85301d9d4\") " pod="kube-system/kube-proxy-tzlpd"
	Oct 18 12:16:56 old-k8s-version-024443 kubelet[1392]: I1018 12:16:56.330235    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjzp\" (UniqueName: \"kubernetes.io/projected/d19b38b0-d7bc-4c78-8c03-60b85301d9d4-kube-api-access-4vjzp\") pod \"kube-proxy-tzlpd\" (UID: \"d19b38b0-d7bc-4c78-8c03-60b85301d9d4\") " pod="kube-system/kube-proxy-tzlpd"
	Oct 18 12:16:56 old-k8s-version-024443 kubelet[1392]: I1018 12:16:56.330261    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d19b38b0-d7bc-4c78-8c03-60b85301d9d4-lib-modules\") pod \"kube-proxy-tzlpd\" (UID: \"d19b38b0-d7bc-4c78-8c03-60b85301d9d4\") " pod="kube-system/kube-proxy-tzlpd"
	Oct 18 12:16:56 old-k8s-version-024443 kubelet[1392]: I1018 12:16:56.330305    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d825bcd2-5610-4618-a451-3781667da707-cni-cfg\") pod \"kindnet-g8pwk\" (UID: \"d825bcd2-5610-4618-a451-3781667da707\") " pod="kube-system/kindnet-g8pwk"
	Oct 18 12:16:56 old-k8s-version-024443 kubelet[1392]: I1018 12:16:56.330333    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d825bcd2-5610-4618-a451-3781667da707-lib-modules\") pod \"kindnet-g8pwk\" (UID: \"d825bcd2-5610-4618-a451-3781667da707\") " pod="kube-system/kindnet-g8pwk"
	Oct 18 12:16:56 old-k8s-version-024443 kubelet[1392]: I1018 12:16:56.330377    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d825bcd2-5610-4618-a451-3781667da707-xtables-lock\") pod \"kindnet-g8pwk\" (UID: \"d825bcd2-5610-4618-a451-3781667da707\") " pod="kube-system/kindnet-g8pwk"
	Oct 18 12:16:56 old-k8s-version-024443 kubelet[1392]: I1018 12:16:56.330407    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d19b38b0-d7bc-4c78-8c03-60b85301d9d4-xtables-lock\") pod \"kube-proxy-tzlpd\" (UID: \"d19b38b0-d7bc-4c78-8c03-60b85301d9d4\") " pod="kube-system/kube-proxy-tzlpd"
	Oct 18 12:16:58 old-k8s-version-024443 kubelet[1392]: I1018 12:16:58.659971    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tzlpd" podStartSLOduration=3.659908886 podCreationTimestamp="2025-10-18 12:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:16:57.645151278 +0000 UTC m=+14.212600491" watchObservedRunningTime="2025-10-18 12:16:58.659908886 +0000 UTC m=+15.227358113"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.142281    1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.177247    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-g8pwk" podStartSLOduration=12.229437697 podCreationTimestamp="2025-10-18 12:16:55 +0000 UTC" firstStartedPulling="2025-10-18 12:16:56.538907649 +0000 UTC m=+13.106356845" lastFinishedPulling="2025-10-18 12:16:58.486666465 +0000 UTC m=+15.054115680" observedRunningTime="2025-10-18 12:16:58.660372506 +0000 UTC m=+15.227821723" watchObservedRunningTime="2025-10-18 12:17:09.177196532 +0000 UTC m=+25.744645746"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.177514    1392 topology_manager.go:215] "Topology Admit Handler" podUID="59e8e628-e270-400c-b0a5-a5aad16a309c" podNamespace="kube-system" podName="coredns-5dd5756b68-s4wnq"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.179645    1392 topology_manager.go:215] "Topology Admit Handler" podUID="2f69c3ee-cd53-4da2-9101-f6e46fb2d81a" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: W1018 12:17:09.179904    1392 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-024443" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-024443' and this object
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: E1018 12:17:09.179949    1392 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-024443" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-024443' and this object
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.328246    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx45j\" (UniqueName: \"kubernetes.io/projected/2f69c3ee-cd53-4da2-9101-f6e46fb2d81a-kube-api-access-sx45j\") pod \"storage-provisioner\" (UID: \"2f69c3ee-cd53-4da2-9101-f6e46fb2d81a\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.328316    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59e8e628-e270-400c-b0a5-a5aad16a309c-config-volume\") pod \"coredns-5dd5756b68-s4wnq\" (UID: \"59e8e628-e270-400c-b0a5-a5aad16a309c\") " pod="kube-system/coredns-5dd5756b68-s4wnq"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.328439    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2f69c3ee-cd53-4da2-9101-f6e46fb2d81a-tmp\") pod \"storage-provisioner\" (UID: \"2f69c3ee-cd53-4da2-9101-f6e46fb2d81a\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.328489    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwfx\" (UniqueName: \"kubernetes.io/projected/59e8e628-e270-400c-b0a5-a5aad16a309c-kube-api-access-9pwfx\") pod \"coredns-5dd5756b68-s4wnq\" (UID: \"59e8e628-e270-400c-b0a5-a5aad16a309c\") " pod="kube-system/coredns-5dd5756b68-s4wnq"
	Oct 18 12:17:09 old-k8s-version-024443 kubelet[1392]: I1018 12:17:09.674541    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.674480729 podCreationTimestamp="2025-10-18 12:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:09.674134601 +0000 UTC m=+26.241583838" watchObservedRunningTime="2025-10-18 12:17:09.674480729 +0000 UTC m=+26.241929942"
	Oct 18 12:17:10 old-k8s-version-024443 kubelet[1392]: E1018 12:17:10.434253    1392 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 12:17:10 old-k8s-version-024443 kubelet[1392]: E1018 12:17:10.434368    1392 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59e8e628-e270-400c-b0a5-a5aad16a309c-config-volume podName:59e8e628-e270-400c-b0a5-a5aad16a309c nodeName:}" failed. No retries permitted until 2025-10-18 12:17:10.934345557 +0000 UTC m=+27.501794762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/59e8e628-e270-400c-b0a5-a5aad16a309c-config-volume") pod "coredns-5dd5756b68-s4wnq" (UID: "59e8e628-e270-400c-b0a5-a5aad16a309c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 18 12:17:11 old-k8s-version-024443 kubelet[1392]: I1018 12:17:11.679416    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-s4wnq" podStartSLOduration=15.679366176 podCreationTimestamp="2025-10-18 12:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:11.679243115 +0000 UTC m=+28.246692328" watchObservedRunningTime="2025-10-18 12:17:11.679366176 +0000 UTC m=+28.246815392"
	Oct 18 12:17:15 old-k8s-version-024443 kubelet[1392]: I1018 12:17:15.080161    1392 topology_manager.go:215] "Topology Admit Handler" podUID="864f752a-d618-4c5e-8c15-67818c8295e2" podNamespace="default" podName="busybox"
	Oct 18 12:17:15 old-k8s-version-024443 kubelet[1392]: I1018 12:17:15.174497    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts4kx\" (UniqueName: \"kubernetes.io/projected/864f752a-d618-4c5e-8c15-67818c8295e2-kube-api-access-ts4kx\") pod \"busybox\" (UID: \"864f752a-d618-4c5e-8c15-67818c8295e2\") " pod="default/busybox"
	Oct 18 12:17:17 old-k8s-version-024443 kubelet[1392]: I1018 12:17:17.695620    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.384744881 podCreationTimestamp="2025-10-18 12:17:14 +0000 UTC" firstStartedPulling="2025-10-18 12:17:15.492475605 +0000 UTC m=+32.059924809" lastFinishedPulling="2025-10-18 12:17:16.8032919 +0000 UTC m=+33.370741108" observedRunningTime="2025-10-18 12:17:17.695186939 +0000 UTC m=+34.262636152" watchObservedRunningTime="2025-10-18 12:17:17.69556118 +0000 UTC m=+34.263010396"
	
	
	==> storage-provisioner [010b13cd2d2b186a0bf2f336a728244f89251748ba58539f952c7b9ee4ec528e] <==
	I1018 12:17:09.542709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:17:09.553139       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:17:09.553192       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 12:17:09.562841       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:17:09.563152       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-024443_7f414deb-d421-4a32-a16a-b841597c18d3!
	I1018 12:17:09.563488       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea2eab2-c98b-4fde-9bd6-441433386ca3", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-024443_7f414deb-d421-4a32-a16a-b841597c18d3 became leader
	I1018 12:17:09.663328       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-024443_7f414deb-d421-4a32-a16a-b841597c18d3!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-024443 -n old-k8s-version-024443
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-024443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.15s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (269.849666ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:17:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
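Note on the failure mode above: MK_ADDON_ENABLE_PAUSED is raised by minikube's paused-state check, which runs `sudo runc list -f json` inside the node before applying the addon; the stderr shows that probe failing because the default runc state directory /run/runc does not exist on this crio node, so the metrics-server manifest is never applied. A minimal manual check, assuming the no-preload-406541 profile is still running (standard minikube/crio CLI usage, not captured from this run):

	# reproduce the paused-state probe that minikube performs on the node:
	out/minikube-linux-amd64 -p no-preload-406541 ssh -- sudo runc list -f json
	# expected to fail the same way: open /run/runc: no such file or directory

	# inspect which state root (if any) crio actually hands to runc:
	out/minikube-linux-amd64 -p no-preload-406541 ssh -- sudo crio config | grep -n runtime_root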
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-406541 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-406541 describe deploy/metrics-server -n kube-system: exit status 1 (63.097902ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-406541 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
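For reference, the expected string in the start_stop_delete_test.go:219 assertion is simply the --registries override prefixed onto the --images override: MetricsServer=fake.domain plus MetricsServer=registry.k8s.io/echoserver:1.4 composes to fake.domain/registry.k8s.io/echoserver:1.4. Because the enable command exited before creating anything, `kubectl describe` finds no deployment and the assertion sees empty deployment info. Had the enable step succeeded, the override could be read back with a standard kubectl query (a sketch, not part of the captured run):

	kubectl --context no-preload-406541 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output: fake.domain/registry.k8s.io/echoserver:1.4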
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-406541
helpers_test.go:243: (dbg) docker inspect no-preload-406541:

-- stdout --
	[
	    {
	        "Id": "3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193",
	        "Created": "2025-10-18T12:16:27.38049252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:27.426465904Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/hostname",
	        "HostsPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/hosts",
	        "LogPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193-json.log",
	        "Name": "/no-preload-406541",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-406541:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-406541",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193",
	                "LowerDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/merged",
	                "UpperDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/diff",
	                "WorkDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-406541",
	                "Source": "/var/lib/docker/volumes/no-preload-406541/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-406541",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-406541",
	                "name.minikube.sigs.k8s.io": "no-preload-406541",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a192488b3c8d060a4dc601700415a19156a9bc103fe7086184dc0b6b28eb98c9",
	            "SandboxKey": "/var/run/docker/netns/a192488b3c8d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-406541": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:5f:31:87:19:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dc7610ce545693ef1e28eeee1b4922dd1bc5e4eb71b054fa064c5359b8ecf50a",
	                    "EndpointID": "ce0c269317ef7d7c61da25131730a478d525c0e4343f11c74629d0b978f5c58e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-406541",
	                        "3111cdfbd44a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
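The NetworkSettings.Ports block in the inspect output above is what ties the kicbase container to the host: each published port (22, 2376, 5000, 8443, 32443) is bound to 127.0.0.1 with an ephemeral host port, so the apiserver inside the container is reachable from the host at 127.0.0.1:33096. The same mapping can be read without the full inspect dump using the standard docker CLI (not captured from this run):

	docker port no-preload-406541 8443
	# 127.0.0.1:33096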
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-406541 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-406541 logs -n 25: (1.100096148s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-376567 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo docker system info                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cri-dockerd --version                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo containerd config dump                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                        │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                         │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                          │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p old-k8s-version-024443 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:17:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:17:09.989378  303392 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:17:09.989603  303392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:09.989610  303392 out.go:374] Setting ErrFile to fd 2...
	I1018 12:17:09.989615  303392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:09.989923  303392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:17:09.990416  303392 out.go:368] Setting JSON to false
	I1018 12:17:09.991870  303392 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3578,"bootTime":1760786252,"procs":395,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:17:09.991983  303392 start.go:141] virtualization: kvm guest
	I1018 12:17:09.994556  303392 out.go:179] * [default-k8s-diff-port-028309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:17:09.996134  303392 notify.go:220] Checking for updates...
	I1018 12:17:09.996189  303392 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:17:09.997726  303392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:17:09.999143  303392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:10.000462  303392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:17:10.001920  303392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:17:10.003352  303392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:17:10.004974  303392 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:10.005114  303392 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:10.005250  303392 config.go:182] Loaded profile config "old-k8s-version-024443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:17:10.005400  303392 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:17:10.030342  303392 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:17:10.030426  303392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:10.097435  303392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:10.084190507 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:10.097535  303392 docker.go:318] overlay module found
	I1018 12:17:10.098905  303392 out.go:179] * Using the docker driver based on user configuration
	I1018 12:17:10.100491  303392 start.go:305] selected driver: docker
	I1018 12:17:10.100527  303392 start.go:925] validating driver "docker" against <nil>
	I1018 12:17:10.100543  303392 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:17:10.101335  303392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:10.178495  303392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:10.16872536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:10.178723  303392 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:17:10.179048  303392 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:10.180927  303392 out.go:179] * Using Docker driver with root privileges
	I1018 12:17:10.182188  303392 cni.go:84] Creating CNI manager for ""
	I1018 12:17:10.182255  303392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:10.182266  303392 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:17:10.182339  303392 start.go:349] cluster config:
	{Name:default-k8s-diff-port-028309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:10.183812  303392 out.go:179] * Starting "default-k8s-diff-port-028309" primary control-plane node in "default-k8s-diff-port-028309" cluster
	I1018 12:17:10.185119  303392 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:17:10.186484  303392 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:17:10.187909  303392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:10.187946  303392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:17:10.187954  303392 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:17:10.187983  303392 cache.go:58] Caching tarball of preloaded images
	I1018 12:17:10.188065  303392 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:17:10.188075  303392 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:17:10.188150  303392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/config.json ...
	I1018 12:17:10.188169  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/config.json: {Name:mk0a7583c0b13847b99f7e6327a163d03ca928e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:10.208446  303392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:17:10.208469  303392 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:17:10.208484  303392 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:17:10.208516  303392 start.go:360] acquireMachinesLock for default-k8s-diff-port-028309: {Name:mk2adb3e724bc0ee6357d7bccded98e7948efa53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:10.208604  303392 start.go:364] duration metric: took 73.641µs to acquireMachinesLock for "default-k8s-diff-port-028309"
	I1018 12:17:10.208627  303392 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-028309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
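
The machine config above maps onto ordinary minikube start flags. A minimal sketch of an equivalent invocation, assuming the standard flag names (the test harness's actual command line is not shown in this log):

    minikube start -p default-k8s-diff-port-028309 \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.34.1 \
      --memory=3072 --cpus=2 --disk-size=20000mb \
      --apiserver-port=8444
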
	I1018 12:17:10.208677  303392 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:17:05.934529  295702 out.go:252]   - Booting up control plane ...
	I1018 12:17:05.934661  295702 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:17:05.934791  295702 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:17:05.934878  295702 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:17:05.952629  295702 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:17:05.953293  295702 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:17:05.961996  295702 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:17:05.962324  295702 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:17:05.962398  295702 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:17:06.071804  295702 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:17:06.071988  295702 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:17:07.573949  295702 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501638356s
	I1018 12:17:07.578334  295702 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:17:07.578454  295702 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 12:17:07.578569  295702 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:17:07.578705  295702 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:17:09.709217  295702 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.130843742s
	I1018 12:17:09.915172  295702 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.336880091s
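
The three control-plane checks above poll the components' local health endpoints; the same probes can be reproduced by hand on the node (certificate verification skipped for brevity, URLs taken from the log lines above):

    curl -k https://192.168.76.2:8443/livez       # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz       # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez         # kube-scheduler
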
	W1018 12:17:08.287122  284229 node_ready.go:57] node "old-k8s-version-024443" has "Ready":"False" status (will retry)
	I1018 12:17:09.287337  284229 node_ready.go:49] node "old-k8s-version-024443" is "Ready"
	I1018 12:17:09.287370  284229 node_ready.go:38] duration metric: took 12.503640215s for node "old-k8s-version-024443" to be "Ready" ...
	I1018 12:17:09.287387  284229 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:09.287439  284229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:09.303391  284229 api_server.go:72] duration metric: took 13.348683953s to wait for apiserver process to appear ...
	I1018 12:17:09.303420  284229 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:09.303566  284229 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:17:09.309885  284229 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:17:09.311661  284229 api_server.go:141] control plane version: v1.28.0
	I1018 12:17:09.311687  284229 api_server.go:131] duration metric: took 8.260308ms to wait for apiserver health ...
	I1018 12:17:09.311697  284229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:09.315954  284229 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:09.315990  284229 system_pods.go:61] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.315998  284229 system_pods.go:61] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.316006  284229 system_pods.go:61] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.316011  284229 system_pods.go:61] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.316018  284229 system_pods.go:61] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.316023  284229 system_pods.go:61] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.316028  284229 system_pods.go:61] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.316035  284229 system_pods.go:61] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:09.316044  284229 system_pods.go:74] duration metric: took 4.340144ms to wait for pod list to return data ...
	I1018 12:17:09.316057  284229 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:09.318622  284229 default_sa.go:45] found service account: "default"
	I1018 12:17:09.318644  284229 default_sa.go:55] duration metric: took 2.580433ms for default service account to be created ...
	I1018 12:17:09.318654  284229 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:09.322568  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:09.322607  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.322616  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.322626  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.322631  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.322637  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.322643  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.322652  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.322659  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:09.322688  284229 retry.go:31] will retry after 255.110485ms: missing components: kube-dns
	I1018 12:17:09.585508  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:09.585549  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.585562  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.585571  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.585577  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.585583  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.585588  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.585596  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.585603  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:09.585623  284229 retry.go:31] will retry after 295.668626ms: missing components: kube-dns
	I1018 12:17:09.889287  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:09.889322  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:09.889332  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:09.889401  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:09.889409  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:09.889456  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:09.889462  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:09.889467  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:09.889472  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:09.889491  284229 retry.go:31] will retry after 391.466411ms: missing components: kube-dns
	I1018 12:17:10.285621  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:10.285657  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:10.285664  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:10.285672  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:10.285678  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:10.285684  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:10.285689  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:10.285695  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:10.285700  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:10.285721  284229 retry.go:31] will retry after 502.967549ms: missing components: kube-dns
	I1018 12:17:10.793348  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:10.793384  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:10.793391  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:10.793397  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:10.793404  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:10.793410  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:10.793416  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:10.793421  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:10.793430  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:10.793448  284229 retry.go:31] will retry after 680.741844ms: missing components: kube-dns
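
While the retry loop above waits on kube-dns, the pending CoreDNS pod can be inspected directly; a sketch using the k8s-app=kube-dns label that the wait logic itself matches on (pod name from this run):

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system describe pod coredns-5dd5756b68-s4wnq
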
	I1018 12:17:11.580325  295702 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00195535s
	I1018 12:17:11.594486  295702 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:17:11.606936  295702 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:17:11.619839  295702 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:17:11.620244  295702 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-175371 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:17:11.628956  295702 kubeadm.go:318] [bootstrap-token] Using token: s0eyel.sxikqwsssyd1yq10
	I1018 12:17:11.630435  295702 out.go:252]   - Configuring RBAC rules ...
	I1018 12:17:11.630592  295702 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:17:11.634025  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:17:11.643366  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:17:11.646654  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:17:11.649593  295702 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:17:11.652274  295702 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:17:11.988043  295702 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:17:12.418439  295702 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:17:12.986955  295702 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:17:12.987871  295702 kubeadm.go:318] 
	I1018 12:17:12.987931  295702 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:17:12.987938  295702 kubeadm.go:318] 
	I1018 12:17:12.988029  295702 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:17:12.988039  295702 kubeadm.go:318] 
	I1018 12:17:12.988084  295702 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:17:12.988144  295702 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:17:12.988273  295702 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:17:12.988292  295702 kubeadm.go:318] 
	I1018 12:17:12.988352  295702 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:17:12.988360  295702 kubeadm.go:318] 
	I1018 12:17:12.988414  295702 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:17:12.988422  295702 kubeadm.go:318] 
	I1018 12:17:12.988486  295702 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:17:12.988571  295702 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:17:12.988653  295702 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:17:12.988670  295702 kubeadm.go:318] 
	I1018 12:17:12.988820  295702 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:17:12.988927  295702 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:17:12.988937  295702 kubeadm.go:318] 
	I1018 12:17:12.989070  295702 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s0eyel.sxikqwsssyd1yq10 \
	I1018 12:17:12.989196  295702 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 12:17:12.989233  295702 kubeadm.go:318] 	--control-plane 
	I1018 12:17:12.989246  295702 kubeadm.go:318] 
	I1018 12:17:12.989361  295702 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:17:12.989374  295702 kubeadm.go:318] 
	I1018 12:17:12.989481  295702 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s0eyel.sxikqwsssyd1yq10 \
	I1018 12:17:12.989615  295702 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
	I1018 12:17:12.992521  295702 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:17:12.992707  295702 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:17:12.992731  295702 cni.go:84] Creating CNI manager for ""
	I1018 12:17:12.992741  295702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:12.996653  295702 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:17:11.479364  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:11.479395  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:11.479401  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:11.479407  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:11.479410  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:11.479414  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:11.479423  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:11.479427  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:11.479430  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:11.479444  284229 retry.go:31] will retry after 842.277236ms: missing components: kube-dns
	I1018 12:17:12.326663  284229 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:12.326690  284229 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Running
	I1018 12:17:12.326696  284229 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running
	I1018 12:17:12.326699  284229 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:12.326702  284229 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running
	I1018 12:17:12.326706  284229 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running
	I1018 12:17:12.326709  284229 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:12.326712  284229 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running
	I1018 12:17:12.326714  284229 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:12.326722  284229 system_pods.go:126] duration metric: took 3.0080623s to wait for k8s-apps to be running ...
	I1018 12:17:12.326742  284229 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:12.326805  284229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:12.341688  284229 system_svc.go:56] duration metric: took 14.934271ms WaitForService to wait for kubelet
	I1018 12:17:12.341736  284229 kubeadm.go:586] duration metric: took 16.387033243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:12.341772  284229 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:12.344633  284229 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:12.344659  284229 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:12.344672  284229 node_conditions.go:105] duration metric: took 2.893864ms to run NodePressure ...
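
The NodePressure figures above come straight from the node's reported capacity; they can be read back with a jsonpath query (node name from this run):

    kubectl get node old-k8s-version-024443 -o jsonpath='{.status.capacity}'
    # expected to include cpu:8 and ephemeral-storage:304681132Ki, matching the log
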
	I1018 12:17:12.344682  284229 start.go:241] waiting for startup goroutines ...
	I1018 12:17:12.344689  284229 start.go:246] waiting for cluster config update ...
	I1018 12:17:12.344698  284229 start.go:255] writing updated cluster config ...
	I1018 12:17:12.345000  284229 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:12.349094  284229 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:12.354126  284229 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.359940  284229 pod_ready.go:94] pod "coredns-5dd5756b68-s4wnq" is "Ready"
	I1018 12:17:12.359973  284229 pod_ready.go:86] duration metric: took 5.816686ms for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.363596  284229 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.368832  284229 pod_ready.go:94] pod "etcd-old-k8s-version-024443" is "Ready"
	I1018 12:17:12.368858  284229 pod_ready.go:86] duration metric: took 5.237265ms for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.377223  284229 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.387405  284229 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-024443" is "Ready"
	I1018 12:17:12.387437  284229 pod_ready.go:86] duration metric: took 10.185515ms for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.394408  284229 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.753723  284229 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-024443" is "Ready"
	I1018 12:17:12.753751  284229 pod_ready.go:86] duration metric: took 359.309074ms for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:12.954388  284229 pod_ready.go:83] waiting for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.353537  284229 pod_ready.go:94] pod "kube-proxy-tzlpd" is "Ready"
	I1018 12:17:13.353563  284229 pod_ready.go:86] duration metric: took 399.15221ms for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.554517  284229 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.953343  284229 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-024443" is "Ready"
	I1018 12:17:13.953372  284229 pod_ready.go:86] duration metric: took 398.824901ms for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:13.953386  284229 pod_ready.go:40] duration metric: took 1.604257018s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:14.000846  284229 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	W1018 12:17:11.297656  284991 node_ready.go:57] node "no-preload-406541" has "Ready":"False" status (will retry)
	W1018 12:17:13.307149  284991 node_ready.go:57] node "no-preload-406541" has "Ready":"False" status (will retry)
	I1018 12:17:14.084909  284229 out.go:203] 
	W1018 12:17:14.120594  284229 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 12:17:14.162086  284229 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 12:17:14.307271  284229 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-024443" cluster and "default" namespace by default
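
The skew warning above (client 1.34.1 against a 1.28.0 cluster) is avoided by using minikube's bundled kubectl, as the hint suggests; with an explicit profile that is:

    minikube -p old-k8s-version-024443 kubectl -- get pods -A
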
	I1018 12:17:10.210809  303392 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:17:10.211061  303392 start.go:159] libmachine.API.Create for "default-k8s-diff-port-028309" (driver="docker")
	I1018 12:17:10.211096  303392 client.go:168] LocalClient.Create starting
	I1018 12:17:10.211197  303392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 12:17:10.211253  303392 main.go:141] libmachine: Decoding PEM data...
	I1018 12:17:10.211271  303392 main.go:141] libmachine: Parsing certificate...
	I1018 12:17:10.211332  303392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 12:17:10.211353  303392 main.go:141] libmachine: Decoding PEM data...
	I1018 12:17:10.211371  303392 main.go:141] libmachine: Parsing certificate...
	I1018 12:17:10.211699  303392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-028309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:17:10.230582  303392 cli_runner.go:211] docker network inspect default-k8s-diff-port-028309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:17:10.230656  303392 network_create.go:284] running [docker network inspect default-k8s-diff-port-028309] to gather additional debugging logs...
	I1018 12:17:10.230674  303392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-028309
	W1018 12:17:10.248645  303392 cli_runner.go:211] docker network inspect default-k8s-diff-port-028309 returned with exit code 1
	I1018 12:17:10.248679  303392 network_create.go:287] error running [docker network inspect default-k8s-diff-port-028309]: docker network inspect default-k8s-diff-port-028309: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-028309 not found
	I1018 12:17:10.248696  303392 network_create.go:289] output of [docker network inspect default-k8s-diff-port-028309]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-028309 not found
	
	** /stderr **
	I1018 12:17:10.248852  303392 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:10.267437  303392 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
	I1018 12:17:10.268053  303392 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6069a4ec9777 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:f7:2a:6b:48:b9} reservation:<nil>}
	I1018 12:17:10.268754  303392 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-670e794a7c9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:d0:78:df:c7:fd} reservation:<nil>}
	I1018 12:17:10.269394  303392 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb34d522296 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:fc:1a:65:23:03} reservation:<nil>}
	I1018 12:17:10.269923  303392 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-704be5e99155 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:69:ed:e3:bb:73} reservation:<nil>}
	I1018 12:17:10.270995  303392 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-dc7610ce5456 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:b6:7c:0a:6d:c2:9c} reservation:<nil>}
	I1018 12:17:10.272601  303392 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5210}
	I1018 12:17:10.272633  303392 network_create.go:124] attempt to create docker network default-k8s-diff-port-028309 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1018 12:17:10.272685  303392 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 default-k8s-diff-port-028309
	I1018 12:17:10.333924  303392 network_create.go:108] docker network default-k8s-diff-port-028309 192.168.103.0/24 created
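
The freshly created bridge network can be verified with the same docker CLI; a minimal check of the subnet and gateway chosen above:

    docker network inspect default-k8s-diff-port-028309 \
      --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
    # expected: 192.168.103.0/24 gw 192.168.103.1
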
	I1018 12:17:10.333952  303392 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-028309" container
	I1018 12:17:10.334071  303392 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:17:10.351696  303392 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-028309 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:17:10.370496  303392 oci.go:103] Successfully created a docker volume default-k8s-diff-port-028309
	I1018 12:17:10.370599  303392 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-028309-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --entrypoint /usr/bin/test -v default-k8s-diff-port-028309:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:17:10.766141  303392 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-028309
	I1018 12:17:10.766175  303392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:10.766195  303392 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:17:10.766251  303392 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-028309:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:17:12.998079  295702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:17:13.003370  295702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:17:13.003388  295702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:17:13.017136  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
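
Once the kindnet manifest is applied, its DaemonSet pods should appear in kube-system; a sketch assuming kindnet's usual app=kindnet label (the manifest itself is not reproduced in this log):

    kubectl -n kube-system get pods -l app=kindnet -o wide
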
	I1018 12:17:13.262082  295702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:17:13.262262  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-175371 minikube.k8s.io/updated_at=2025_10_18T12_17_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=embed-certs-175371 minikube.k8s.io/primary=true
	I1018 12:17:13.262420  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:13.275244  295702 ops.go:34] apiserver oom_adj: -16
	I1018 12:17:13.576589  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:14.076753  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:14.577362  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:15.076879  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:15.576880  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:16.076879  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:16.576927  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:17.076975  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:17.577462  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:18.077589  295702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:17:18.169822  295702 kubeadm.go:1113] duration metric: took 4.907730706s to wait for elevateKubeSystemPrivileges
	I1018 12:17:18.169943  295702 kubeadm.go:402] duration metric: took 15.899918067s to StartCluster
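
The repeated `get sa default` calls above are a poll for the default ServiceAccount, which signals that kube-system privileges can be elevated; the pattern reduces to a simple shell loop using the same binary and kubeconfig as the log:

    # wait until the "default" ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
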
	I1018 12:17:18.169982  295702 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:18.170092  295702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:18.172421  295702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:18.172713  295702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:17:18.172723  295702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:17:18.172836  295702 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:17:18.172920  295702 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175371"
	I1018 12:17:18.172939  295702 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-175371"
	I1018 12:17:18.172969  295702 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:17:18.172982  295702 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:18.173071  295702 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175371"
	I1018 12:17:18.173091  295702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175371"
	I1018 12:17:18.173465  295702 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:17:18.174383  295702 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:17:18.177470  295702 out.go:179] * Verifying Kubernetes components...
	I1018 12:17:18.179118  295702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:18.203275  295702 addons.go:238] Setting addon default-storageclass=true in "embed-certs-175371"
	I1018 12:17:18.203323  295702 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:17:18.203854  295702 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:17:18.203998  295702 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:17:18.205863  295702 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:18.205894  295702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:17:18.205953  295702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:17:18.234520  295702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:17:18.237786  295702 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:18.237809  295702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:17:18.237882  295702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:17:18.263799  295702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:17:18.283808  295702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:17:18.353452  295702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:18.360988  295702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:18.385433  295702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:18.481027  295702 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
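
The injected host record can be confirmed by reading the Corefile back out of the coredns ConfigMap:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts'
    # expected to show: 192.168.76.1 host.minikube.internal
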
	I1018 12:17:18.482217  295702 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:17:18.712676  295702 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 12:17:15.796454  284991 node_ready.go:57] node "no-preload-406541" has "Ready":"False" status (will retry)
	I1018 12:17:17.297044  284991 node_ready.go:49] node "no-preload-406541" is "Ready"
	I1018 12:17:17.297072  284991 node_ready.go:38] duration metric: took 12.503291692s for node "no-preload-406541" to be "Ready" ...
	I1018 12:17:17.297084  284991 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:17.297128  284991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:17.309995  284991 api_server.go:72] duration metric: took 12.944612407s to wait for apiserver process to appear ...
	I1018 12:17:17.310026  284991 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:17.310046  284991 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 12:17:17.314280  284991 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 12:17:17.315126  284991 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:17.315146  284991 api_server.go:131] duration metric: took 5.114723ms to wait for apiserver health ...
	I1018 12:17:17.315154  284991 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:17.319212  284991 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:17.319248  284991 system_pods.go:61] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:17.319255  284991 system_pods.go:61] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.319261  284991 system_pods.go:61] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.319274  284991 system_pods.go:61] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.319282  284991 system_pods.go:61] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.319286  284991 system_pods.go:61] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.319289  284991 system_pods.go:61] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.319294  284991 system_pods.go:61] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:17.319302  284991 system_pods.go:74] duration metric: took 4.14335ms to wait for pod list to return data ...
	I1018 12:17:17.319309  284991 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:17.321902  284991 default_sa.go:45] found service account: "default"
	I1018 12:17:17.321920  284991 default_sa.go:55] duration metric: took 2.606649ms for default service account to be created ...
	I1018 12:17:17.321928  284991 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:17.324418  284991 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:17.324440  284991 system_pods.go:89] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:17.324448  284991 system_pods.go:89] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.324458  284991 system_pods.go:89] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.324464  284991 system_pods.go:89] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.324471  284991 system_pods.go:89] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.324488  284991 system_pods.go:89] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.324493  284991 system_pods.go:89] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.324500  284991 system_pods.go:89] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:17.324522  284991 retry.go:31] will retry after 270.937375ms: missing components: kube-dns
	I1018 12:17:17.600079  284991 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:17.600111  284991 system_pods.go:89] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:17.600118  284991 system_pods.go:89] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.600125  284991 system_pods.go:89] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.600129  284991 system_pods.go:89] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.600132  284991 system_pods.go:89] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.600135  284991 system_pods.go:89] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.600139  284991 system_pods.go:89] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.600144  284991 system_pods.go:89] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:17.600157  284991 retry.go:31] will retry after 359.077664ms: missing components: kube-dns
	I1018 12:17:17.963458  284991 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:17.963491  284991 system_pods.go:89] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Running
	I1018 12:17:17.963500  284991 system_pods.go:89] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running
	I1018 12:17:17.963505  284991 system_pods.go:89] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running
	I1018 12:17:17.963510  284991 system_pods.go:89] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running
	I1018 12:17:17.963516  284991 system_pods.go:89] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running
	I1018 12:17:17.963521  284991 system_pods.go:89] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running
	I1018 12:17:17.963526  284991 system_pods.go:89] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running
	I1018 12:17:17.963532  284991 system_pods.go:89] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Running
	I1018 12:17:17.963543  284991 system_pods.go:126] duration metric: took 641.608816ms to wait for k8s-apps to be running ...
	I1018 12:17:17.963558  284991 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:17.963606  284991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:17.980464  284991 system_svc.go:56] duration metric: took 16.897132ms WaitForService to wait for kubelet
	I1018 12:17:17.980496  284991 kubeadm.go:586] duration metric: took 13.615118006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:17.980520  284991 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:17.983782  284991 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:17.983813  284991 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:17.983830  284991 node_conditions.go:105] duration metric: took 3.303337ms to run NodePressure ...
	I1018 12:17:17.983845  284991 start.go:241] waiting for startup goroutines ...
	I1018 12:17:17.983859  284991 start.go:246] waiting for cluster config update ...
	I1018 12:17:17.983875  284991 start.go:255] writing updated cluster config ...
	I1018 12:17:17.984155  284991 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:17.988902  284991 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:17.992701  284991 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:17.997229  284991 pod_ready.go:94] pod "coredns-66bc5c9577-bwvrq" is "Ready"
	I1018 12:17:17.997250  284991 pod_ready.go:86] duration metric: took 4.522372ms for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:17.999467  284991 pod_ready.go:83] waiting for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.003331  284991 pod_ready.go:94] pod "etcd-no-preload-406541" is "Ready"
	I1018 12:17:18.003351  284991 pod_ready.go:86] duration metric: took 3.86318ms for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.005221  284991 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.008960  284991 pod_ready.go:94] pod "kube-apiserver-no-preload-406541" is "Ready"
	I1018 12:17:18.008978  284991 pod_ready.go:86] duration metric: took 3.740672ms for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.010873  284991 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.394228  284991 pod_ready.go:94] pod "kube-controller-manager-no-preload-406541" is "Ready"
	I1018 12:17:18.394253  284991 pod_ready.go:86] duration metric: took 383.353644ms for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.593712  284991 pod_ready.go:83] waiting for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:18.992879  284991 pod_ready.go:94] pod "kube-proxy-9vbmr" is "Ready"
	I1018 12:17:18.992904  284991 pod_ready.go:86] duration metric: took 399.166244ms for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
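
Each pod_ready check above amounts to waiting on the pod's Ready condition; the same wait can be expressed with kubectl directly, e.g. for the kube-dns pods:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
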
	I1018 12:17:15.497742  303392 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-028309:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.7314372s)
	I1018 12:17:15.497791  303392 kic.go:203] duration metric: took 4.731592001s to extract preloaded images to volume ...
	W1018 12:17:15.497875  303392 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 12:17:15.497913  303392 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 12:17:15.497958  303392 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:17:15.554503  303392 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-028309 --name default-k8s-diff-port-028309 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-028309 --network default-k8s-diff-port-028309 --ip 192.168.103.2 --volume default-k8s-diff-port-028309:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:17:15.848403  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Running}}
	I1018 12:17:15.868112  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Status}}
	I1018 12:17:15.889538  303392 cli_runner.go:164] Run: docker exec default-k8s-diff-port-028309 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:17:15.935717  303392 oci.go:144] the created container "default-k8s-diff-port-028309" has a running status.
	I1018 12:17:15.935747  303392 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa...
	I1018 12:17:16.250940  303392 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:17:16.282552  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Status}}
	I1018 12:17:16.302191  303392 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:17:16.302212  303392 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-028309 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:17:16.355540  303392 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Status}}
	I1018 12:17:16.376024  303392 machine.go:93] provisionDockerMachine start ...
	I1018 12:17:16.376112  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:16.395817  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:16.396165  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:16.396187  303392 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:17:16.533433  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-028309
	
	I1018 12:17:16.533460  303392 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-028309"
	I1018 12:17:16.533528  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:16.553156  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:16.553400  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:16.553416  303392 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-028309 && echo "default-k8s-diff-port-028309" | sudo tee /etc/hostname
	I1018 12:17:16.707408  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-028309
	
	I1018 12:17:16.707493  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:16.731704  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:16.732025  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:16.732060  303392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-028309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-028309/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-028309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:17:16.879824  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:17:16.879858  303392 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:17:16.879883  303392 ubuntu.go:190] setting up certificates
	I1018 12:17:16.879895  303392 provision.go:84] configureAuth start
	I1018 12:17:16.879956  303392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-028309
	I1018 12:17:16.901411  303392 provision.go:143] copyHostCerts
	I1018 12:17:16.901473  303392 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:17:16.901487  303392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:17:16.901580  303392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:17:16.902243  303392 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:17:16.902265  303392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:17:16.902330  303392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:17:16.902433  303392 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:17:16.902445  303392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:17:16.902486  303392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:17:16.902559  303392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-028309 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-028309 localhost minikube]
	I1018 12:17:17.475066  303392 provision.go:177] copyRemoteCerts
	I1018 12:17:17.475128  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:17:17.475162  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:17.493468  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:17.592023  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:17:17.616593  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 12:17:17.639348  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:17:17.660022  303392 provision.go:87] duration metric: took 780.113558ms to configureAuth
	I1018 12:17:17.660047  303392 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:17:17.660222  303392 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:17.660343  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:17.680521  303392 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:17.680804  303392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1018 12:17:17.680830  303392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:17:17.945969  303392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:17:17.946001  303392 machine.go:96] duration metric: took 1.569952227s to provisionDockerMachine
	I1018 12:17:17.946014  303392 client.go:171] duration metric: took 7.734907093s to LocalClient.Create
	I1018 12:17:17.946036  303392 start.go:167] duration metric: took 7.734975287s to libmachine.API.Create "default-k8s-diff-port-028309"
	I1018 12:17:17.946046  303392 start.go:293] postStartSetup for "default-k8s-diff-port-028309" (driver="docker")
	I1018 12:17:17.946060  303392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:17:17.946122  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:17:17.946169  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:17.965880  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.071011  303392 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:17:18.075228  303392 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:17:18.075259  303392 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:17:18.075273  303392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:17:18.075336  303392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:17:18.075446  303392 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:17:18.075579  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:17:18.086195  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:18.118836  303392 start.go:296] duration metric: took 172.773702ms for postStartSetup
	I1018 12:17:18.119235  303392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-028309
	I1018 12:17:18.143686  303392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/config.json ...
	I1018 12:17:18.143973  303392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:17:18.144013  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:18.167444  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.280503  303392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:17:18.287114  303392 start.go:128] duration metric: took 8.078425s to createHost
	I1018 12:17:18.287143  303392 start.go:83] releasing machines lock for "default-k8s-diff-port-028309", held for 8.078526872s
	I1018 12:17:18.287216  303392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-028309
	I1018 12:17:18.311862  303392 ssh_runner.go:195] Run: cat /version.json
	I1018 12:17:18.311924  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:18.312047  303392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:17:18.312123  303392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:17:18.340687  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.341063  303392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:17:18.526742  303392 ssh_runner.go:195] Run: systemctl --version
	I1018 12:17:18.535153  303392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:17:18.574803  303392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:17:18.580562  303392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:17:18.580621  303392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:17:18.611420  303392 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 12:17:18.611447  303392 start.go:495] detecting cgroup driver to use...
	I1018 12:17:18.611485  303392 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:17:18.611537  303392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:17:18.633596  303392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:17:18.648429  303392 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:17:18.648493  303392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:17:18.669800  303392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:17:18.694052  303392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:17:18.786920  303392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:17:18.883823  303392 docker.go:234] disabling docker service ...
	I1018 12:17:18.883890  303392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:17:18.903035  303392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:17:18.917073  303392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:17:19.005318  303392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:17:19.093575  303392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:17:19.106427  303392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:17:19.121279  303392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:17:19.121342  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.132559  303392 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:17:19.132631  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.142771  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.152185  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.161843  303392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:17:19.170940  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.180720  303392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.195395  303392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:19.205123  303392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:17:19.213211  303392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:17:19.221422  303392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:19.307098  303392 ssh_runner.go:195] Run: sudo systemctl restart crio
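
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with values along these lines (a reconstruction from the commands shown, not a dump from the node; surrounding TOML sections omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The unprivileged-port sysctl lets pod processes bind ports below 1024 without extra capabilities, and the `echo 1 > /proc/sys/net/ipv4/ip_forward` step enables the IP forwarding that pod networking needs, all before crio is restarted.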
	I1018 12:17:19.419859  303392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:17:19.419914  303392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:17:19.424208  303392 start.go:563] Will wait 60s for crictl version
	I1018 12:17:19.424278  303392 ssh_runner.go:195] Run: which crictl
	I1018 12:17:19.428097  303392 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:17:19.453439  303392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:17:19.453523  303392 ssh_runner.go:195] Run: crio --version
	I1018 12:17:19.483426  303392 ssh_runner.go:195] Run: crio --version
	I1018 12:17:19.514194  303392 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:17:19.193332  284991 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:19.592940  284991 pod_ready.go:94] pod "kube-scheduler-no-preload-406541" is "Ready"
	I1018 12:17:19.592969  284991 pod_ready.go:86] duration metric: took 399.614368ms for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:19.592984  284991 pod_ready.go:40] duration metric: took 1.604049633s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:19.645987  284991 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:17:19.647961  284991 out.go:179] * Done! kubectl is now configured to use "no-preload-406541" cluster and "default" namespace by default
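
The pod_ready lines above poll each control-plane pod until its Ready condition reports True (or the pod disappears, hence "or be gone"). A minimal sketch of that condition test, assuming the k8s.io/api types minikube itself builds against:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podIsReady mirrors the check behind the pod_ready.go lines: a pod
	// counts as "Ready" once its PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		}}}
		fmt.Println(podIsReady(pod)) // true
	}

The "duration metric" lines then record how long each wait took, which is why per-pod times range from a few milliseconds (already Ready) to a few hundred (one poll interval).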
	I1018 12:17:19.515505  303392 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-028309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:19.532795  303392 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 12:17:19.537047  303392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:19.547362  303392 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-028309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:17:19.547478  303392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:19.547519  303392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:19.580110  303392 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:19.580131  303392 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:17:19.580173  303392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:19.607803  303392 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:19.607829  303392 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:17:19.607838  303392 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1018 12:17:19.607930  303392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-028309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
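
Note the bare "ExecStart=" line in the [Service] section above: in a systemd drop-in an empty assignment is the reset idiom, clearing the ExecStart inherited from the base kubelet unit so the following line replaces the command rather than appending a second one. On the node, `systemctl cat kubelet` shows the merged result of the base unit plus this 10-kubeadm.conf drop-in.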
	I1018 12:17:19.608029  303392 ssh_runner.go:195] Run: crio config
	I1018 12:17:19.663204  303392 cni.go:84] Creating CNI manager for ""
	I1018 12:17:19.663226  303392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:19.663243  303392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:17:19.663265  303392 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-028309 NodeName:default-k8s-diff-port-028309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:17:19.663413  303392 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-028309"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:17:19.663471  303392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:17:19.673382  303392 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:17:19.673471  303392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:17:19.683728  303392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1018 12:17:19.699354  303392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:17:19.716134  303392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1018 12:17:19.730855  303392 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:17:19.735754  303392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
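
The hosts-file edits above (used for both host.minikube.internal and control-plane.minikube.internal) follow an idempotent pattern: drop any existing line for the name, append a fresh "IP<TAB>name" mapping, and copy the temp file over /etc/hosts rather than renaming it, since that file is bind-mounted inside the container and must keep its inode. A rough Go equivalent, for illustration only (the path and names are the ones from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry strips any line already ending in "\t<name>",
	// appends a fresh mapping, and rewrites the file in place.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, l := range lines {
			if !strings.HasSuffix(l, "\t"+name) {
				kept = append(kept, l)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Demo against a scratch file; the real target is /etc/hosts (root only).
		f := "hosts.demo"
		os.WriteFile(f, []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := ensureHostsEntry(f, "192.168.103.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		out, _ := os.ReadFile(f)
		fmt.Print(string(out))
	}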
	I1018 12:17:19.747568  303392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:19.844411  303392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:19.864357  303392 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309 for IP: 192.168.103.2
	I1018 12:17:19.864378  303392 certs.go:195] generating shared ca certs ...
	I1018 12:17:19.864400  303392 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:19.864544  303392 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:17:19.864596  303392 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:17:19.864608  303392 certs.go:257] generating profile certs ...
	I1018 12:17:19.864691  303392 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.key
	I1018 12:17:19.864708  303392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.crt with IP's: []
	I1018 12:17:18.713847  295702 addons.go:514] duration metric: took 541.005493ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:17:18.985588  295702 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-175371" context rescaled to 1 replicas
	W1018 12:17:20.485494  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	I1018 12:17:20.001925  303392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.crt ...
	I1018 12:17:20.001954  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.crt: {Name:mkfc7c92d5c8617f11f3cb6f25e639839f9b3da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:20.002145  303392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.key ...
	I1018 12:17:20.002161  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/client.key: {Name:mk6a3ace004c640b39e675820c1e21d364515530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:20.002284  303392 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.key.b2f0a738
	I1018 12:17:20.002316  303392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.crt.b2f0a738 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1018 12:17:20.236523  303392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.crt.b2f0a738 ...
	I1018 12:17:20.236551  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.crt.b2f0a738: {Name:mkfe88801a7afdcc31d97180b9daef631067925f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:20.236710  303392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.key.b2f0a738 ...
	I1018 12:17:20.236723  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.key.b2f0a738: {Name:mk6d14ca1f8992a6a21ad19663b388fcf6f28ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:20.236808  303392 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.crt.b2f0a738 -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.crt
	I1018 12:17:20.236892  303392 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.key.b2f0a738 -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.key
	I1018 12:17:20.236948  303392 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.key
	I1018 12:17:20.236964  303392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.crt with IP's: []
	I1018 12:17:20.601277  303392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.crt ...
	I1018 12:17:20.601310  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.crt: {Name:mkf4c643bc26cc4f4ad0749b8465f5606990a8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:20.601468  303392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.key ...
	I1018 12:17:20.601488  303392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.key: {Name:mk69bf567d22092a86f2cb74fbf7280c84eda0f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
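
The certs.go/crypto.go sequence above derives three signed profile certs from the shared minikubeCA: a "minikube-user" client cert for kubectl, an apiserver serving cert whose SANs carry the service VIP (10.96.0.1), loopback, and the node IP, and a proxy-client (aggregator) cert. A compressed sketch of the serving-cert step using Go's crypto/x509, with a throwaway self-signed CA standing in for the real minikubeCA key pair:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA in place of the real minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the SANs listed in the log above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("signed apiserver cert: %d DER bytes\n", len(der))
	}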
	I1018 12:17:20.601652  303392 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:17:20.601687  303392 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:17:20.601697  303392 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:17:20.601717  303392 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:17:20.601738  303392 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:17:20.601784  303392 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:17:20.601849  303392 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:20.602414  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:17:20.622077  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:17:20.639771  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:17:20.657194  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:17:20.674783  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 12:17:20.693110  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:17:20.711357  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:17:20.729219  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/default-k8s-diff-port-028309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:17:20.747634  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:17:20.767261  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:17:20.785214  303392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:17:20.803488  303392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:17:20.816475  303392 ssh_runner.go:195] Run: openssl version
	I1018 12:17:20.822804  303392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:17:20.831717  303392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:20.835745  303392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:20.835824  303392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:20.871552  303392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:17:20.881225  303392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:17:20.889914  303392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:17:20.893810  303392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:17:20.893859  303392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:17:20.929472  303392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:17:20.938801  303392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:17:20.947397  303392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:17:20.951158  303392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:17:20.951214  303392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:17:20.987218  303392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
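
Each `openssl x509 -hash -noout -in <cert>` run above prints the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, which is why the three certs end up linked as b5213941.0, 51391683.0 and 3ec20f2e.0. The same value can be computed by shelling out exactly as the commands in the log do:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject-name hash of a PEM cert,
	// i.e. the basename used for the /etc/ssl/certs/<hash>.0 symlink.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		fmt.Println(h) // prints "b5213941" for the minikube CA above
	}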
	I1018 12:17:20.996641  303392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:17:21.000791  303392 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:17:21.000842  303392 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-028309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-028309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:21.000903  303392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:17:21.000955  303392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:17:21.026536  303392 cri.go:89] found id: ""
	I1018 12:17:21.026598  303392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:17:21.034709  303392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:17:21.042872  303392 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:17:21.042919  303392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:17:21.050616  303392 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:17:21.050633  303392 kubeadm.go:157] found existing configuration files:
	
	I1018 12:17:21.050667  303392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 12:17:21.058598  303392 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:17:21.058651  303392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:17:21.066027  303392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 12:17:21.073981  303392 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:17:21.074039  303392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:17:21.081720  303392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 12:17:21.089547  303392 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:17:21.089599  303392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:17:21.097568  303392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 12:17:21.105420  303392 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:17:21.105483  303392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:17:21.113224  303392 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:17:21.172364  303392 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:17:21.236126  303392 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 12:17:22.485639  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	W1018 12:17:24.486227  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 12:17:17 no-preload-406541 crio[770]: time="2025-10-18T12:17:17.241383954Z" level=info msg="Starting container: dae44df4541dbe4cd958d670f30fde26a614d923050756970ee293a99d182ef7" id=8ae32a4a-05d6-4033-9fdd-136fcbd5a5e4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:17:17 no-preload-406541 crio[770]: time="2025-10-18T12:17:17.243054688Z" level=info msg="Started container" PID=2867 containerID=dae44df4541dbe4cd958d670f30fde26a614d923050756970ee293a99d182ef7 description=kube-system/coredns-66bc5c9577-bwvrq/coredns id=8ae32a4a-05d6-4033-9fdd-136fcbd5a5e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b908aefb013e5a1a0f89f13b4bb9ed33c952ff59d0850572dcc1802ed13a1eaf
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.115437738Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0d212534-579e-4bc2-be10-19743ce0e138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.115553563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.120777731Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dfcf29f55a0fd902682fdd06ae502291137e8b7715ce5bc3474e102465efaec6 UID:f4ad8cbc-03d3-4f16-ab03-49d332b6fff3 NetNS:/var/run/netns/c9f02edc-66df-44e3-9652-e601ce81e726 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa40}] Aliases:map[]}"
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.120815199Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.13131764Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dfcf29f55a0fd902682fdd06ae502291137e8b7715ce5bc3474e102465efaec6 UID:f4ad8cbc-03d3-4f16-ab03-49d332b6fff3 NetNS:/var/run/netns/c9f02edc-66df-44e3-9652-e601ce81e726 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa40}] Aliases:map[]}"
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.131452805Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.132293936Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.133156943Z" level=info msg="Ran pod sandbox dfcf29f55a0fd902682fdd06ae502291137e8b7715ce5bc3474e102465efaec6 with infra container: default/busybox/POD" id=0d212534-579e-4bc2-be10-19743ce0e138 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.134465087Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d6c95f5b-ff2b-4f10-bb1a-96d4f40c7030 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.134596919Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d6c95f5b-ff2b-4f10-bb1a-96d4f40c7030 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.134631195Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d6c95f5b-ff2b-4f10-bb1a-96d4f40c7030 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.135196237Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8d1f5611-7815-469e-90b8-3d1b00c7a45f name=/runtime.v1.ImageService/PullImage
	Oct 18 12:17:20 no-preload-406541 crio[770]: time="2025-10-18T12:17:20.136816529Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.504268528Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8d1f5611-7815-469e-90b8-3d1b00c7a45f name=/runtime.v1.ImageService/PullImage
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.504919898Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7b4d9ca7-a7a0-4cd9-bb4f-92219773c4b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.506256943Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1c4fe391-8395-4987-89fa-4d195958a87b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.509556051Z" level=info msg="Creating container: default/busybox/busybox" id=a412dd25-9dc6-4bd4-abd0-3ab1ffabb024 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.510414927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.514250003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.514732752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.538088187Z" level=info msg="Created container 45e9aa6961d58adaef1a496ab5f13c5ac4e05c6187bc5157140d6169d751d361: default/busybox/busybox" id=a412dd25-9dc6-4bd4-abd0-3ab1ffabb024 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.538799469Z" level=info msg="Starting container: 45e9aa6961d58adaef1a496ab5f13c5ac4e05c6187bc5157140d6169d751d361" id=f5a2db28-2885-4f76-b0fa-a701f46da631 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:17:21 no-preload-406541 crio[770]: time="2025-10-18T12:17:21.540926183Z" level=info msg="Started container" PID=2944 containerID=45e9aa6961d58adaef1a496ab5f13c5ac4e05c6187bc5157140d6169d751d361 description=default/busybox/busybox id=f5a2db28-2885-4f76-b0fa-a701f46da631 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfcf29f55a0fd902682fdd06ae502291137e8b7715ce5bc3474e102465efaec6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	45e9aa6961d58       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   dfcf29f55a0fd       busybox                                     default
	dae44df4541db       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   b908aefb013e5       coredns-66bc5c9577-bwvrq                    kube-system
	f73bdfb336820       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   cdcee183d84cb       storage-provisioner                         kube-system
	b8a7910d1e254       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   4d7bb3fe4f4fb       kindnet-dwg7c                               kube-system
	c6baf0327eaeb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   00ff343f16fff       kube-proxy-9vbmr                            kube-system
	4bc1b34b358fe       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   b83c5e7a730e8       kube-apiserver-no-preload-406541            kube-system
	2d55732b2bb28       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   a769eeed1b6be       etcd-no-preload-406541                      kube-system
	54366a385307a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   45b2a4f6c49d5       kube-scheduler-no-preload-406541            kube-system
	79e75144ed5fb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   4d77bf20527f8       kube-controller-manager-no-preload-406541   kube-system
	
	
	==> coredns [dae44df4541dbe4cd958d670f30fde26a614d923050756970ee293a99d182ef7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52732 - 13605 "HINFO IN 2407125301149114411.167750720505525065. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.420316524s
	
	
	==> describe nodes <==
	Name:               no-preload-406541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-406541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=no-preload-406541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-406541
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:17:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:17:16 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:17:16 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:17:16 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:17:16 +0000   Sat, 18 Oct 2025 12:17:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-406541
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                3289e84c-c9b3-408a-9f62-dbb3085e7d17
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-bwvrq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-406541                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-dwg7c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-406541             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-406541    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-9vbmr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-406541             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node no-preload-406541 event: Registered Node no-preload-406541 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-406541 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [2d55732b2bb288bb105e2d35e50bfef7e80fd556ea2329bd901d16ee2e18a7d8] <==
	{"level":"info","ts":"2025-10-18T12:16:55.607930Z","caller":"traceutil/trace.go:172","msg":"trace[1613772861] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-4fnsfgnkkt5pidvcyc2dmrdmsi; range_end:; response_count:0; response_revision:16; }","duration":"163.654471ms","start":"2025-10-18T12:16:55.444258Z","end":"2025-10-18T12:16:55.607912Z","steps":["trace[1613772861] 'agreement among raft nodes before linearized reading'  (duration: 161.555622ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.606401Z","caller":"traceutil/trace.go:172","msg":"trace[2096062635] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"207.716164ms","start":"2025-10-18T12:16:55.398672Z","end":"2025-10-18T12:16:55.606389Z","steps":["trace[2096062635] 'process raft request'  (duration: 207.636931ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.607012Z","caller":"traceutil/trace.go:172","msg":"trace[884865141] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"208.114102ms","start":"2025-10-18T12:16:55.398884Z","end":"2025-10-18T12:16:55.606998Z","steps":["trace[884865141] 'process raft request'  (duration: 207.481296ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.607105Z","caller":"traceutil/trace.go:172","msg":"trace[991545018] transaction","detail":"{read_only:false; response_revision:20; number_of_response:1; }","duration":"207.969959ms","start":"2025-10-18T12:16:55.399128Z","end":"2025-10-18T12:16:55.607098Z","steps":["trace[991545018] 'process raft request'  (duration: 207.306593ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.607129Z","caller":"traceutil/trace.go:172","msg":"trace[284946732] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"207.869656ms","start":"2025-10-18T12:16:55.399254Z","end":"2025-10-18T12:16:55.607124Z","steps":["trace[284946732] 'process raft request'  (duration: 207.235835ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.607145Z","caller":"traceutil/trace.go:172","msg":"trace[442806042] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"207.713119ms","start":"2025-10-18T12:16:55.399427Z","end":"2025-10-18T12:16:55.607140Z","steps":["trace[442806042] 'process raft request'  (duration: 207.087397ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.607161Z","caller":"traceutil/trace.go:172","msg":"trace[1655730969] transaction","detail":"{read_only:false; response_revision:23; number_of_response:1; }","duration":"206.659949ms","start":"2025-10-18T12:16:55.400497Z","end":"2025-10-18T12:16:55.607157Z","steps":["trace[1655730969] 'process raft request'  (duration: 206.048571ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.607408Z","caller":"traceutil/trace.go:172","msg":"trace[90105135] transaction","detail":"{read_only:false; response_revision:17; number_of_response:1; }","duration":"211.490645ms","start":"2025-10-18T12:16:55.395902Z","end":"2025-10-18T12:16:55.607393Z","steps":["trace[90105135] 'process raft request'  (duration: 209.922975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:16:55.605915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.298524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-18T12:16:55.610182Z","caller":"traceutil/trace.go:172","msg":"trace[2068399273] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:16; }","duration":"173.562405ms","start":"2025-10-18T12:16:55.436595Z","end":"2025-10-18T12:16:55.610157Z","steps":["trace[2068399273] 'agreement among raft nodes before linearized reading'  (duration: 169.269977ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.955690Z","caller":"traceutil/trace.go:172","msg":"trace[994483540] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"312.64309ms","start":"2025-10-18T12:16:55.643028Z","end":"2025-10-18T12:16:55.955671Z","steps":["trace[994483540] 'process raft request'  (duration: 312.614773ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:16:55.956154Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:16:55.643016Z","time spent":"312.736487ms","remote":"127.0.0.1:48704","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":959,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io\" value_size:886 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:16:55.956373Z","caller":"traceutil/trace.go:172","msg":"trace[390632100] transaction","detail":"{read_only:false; response_revision:38; number_of_response:1; }","duration":"313.419145ms","start":"2025-10-18T12:16:55.642940Z","end":"2025-10-18T12:16:55.956359Z","steps":["trace[390632100] 'process raft request'  (duration: 312.63737ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.956407Z","caller":"traceutil/trace.go:172","msg":"trace[723326400] transaction","detail":"{read_only:false; response_revision:39; number_of_response:1; }","duration":"313.449861ms","start":"2025-10-18T12:16:55.642940Z","end":"2025-10-18T12:16:55.956390Z","steps":["trace[723326400] 'process raft request'  (duration: 312.679815ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:16:55.956432Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:16:55.642919Z","time spent":"313.483999ms","remote":"127.0.0.1:48704","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":953,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.resource.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.resource.k8s.io\" value_size:882 >> failure:<>"}
	{"level":"warn","ts":"2025-10-18T12:16:55.956446Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:16:55.642919Z","time spent":"313.508872ms","remote":"127.0.0.1:48704","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":983,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io\" value_size:902 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:16:55.956412Z","caller":"traceutil/trace.go:172","msg":"trace[1481500662] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"313.421038ms","start":"2025-10-18T12:16:55.642966Z","end":"2025-10-18T12:16:55.956387Z","steps":["trace[1481500662] 'process raft request'  (duration: 235.657828ms)","trace[1481500662] 'compare'  (duration: 76.812295ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:16:55.956518Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:16:55.642937Z","time spent":"313.565736ms","remote":"127.0.0.1:48448","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":701,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/prioritylevelconfigurations/workload-low\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/workload-low\" value_size:643 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T12:16:55.978452Z","caller":"traceutil/trace.go:172","msg":"trace[282111497] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"286.51137ms","start":"2025-10-18T12:16:55.691922Z","end":"2025-10-18T12:16:55.978433Z","steps":["trace[282111497] 'process raft request'  (duration: 286.472154ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:55.978541Z","caller":"traceutil/trace.go:172","msg":"trace[1589067338] transaction","detail":"{read_only:false; response_revision:41; number_of_response:1; }","duration":"288.033764ms","start":"2025-10-18T12:16:55.690490Z","end":"2025-10-18T12:16:55.978524Z","steps":["trace[1589067338] 'process raft request'  (duration: 287.811154ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:16:56.146025Z","caller":"traceutil/trace.go:172","msg":"trace[1643640282] linearizableReadLoop","detail":"{readStateIndex:53; appliedIndex:53; }","duration":"119.38ms","start":"2025-10-18T12:16:56.026619Z","end":"2025-10-18T12:16:56.145999Z","steps":["trace[1643640282] 'read index received'  (duration: 119.3725ms)","trace[1643640282] 'applied index is now lower than readState.Index'  (duration: 6.301µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T12:16:56.200972Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"174.355232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/no-preload-406541.186f950163fe890d\" limit:1 ","response":"range_response_count:1 size:713"}
	{"level":"info","ts":"2025-10-18T12:16:56.201111Z","caller":"traceutil/trace.go:172","msg":"trace[187490120] range","detail":"{range_begin:/registry/events/default/no-preload-406541.186f950163fe890d; range_end:; response_count:1; response_revision:48; }","duration":"174.519586ms","start":"2025-10-18T12:16:56.026573Z","end":"2025-10-18T12:16:56.201093Z","steps":["trace[187490120] 'agreement among raft nodes before linearized reading'  (duration: 119.506127ms)","trace[187490120] 'range keys from in-memory index tree'  (duration: 54.736753ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:16:56.201056Z","caller":"traceutil/trace.go:172","msg":"trace[704700851] transaction","detail":"{read_only:false; response_revision:49; number_of_response:1; }","duration":"180.894528ms","start":"2025-10-18T12:16:56.020140Z","end":"2025-10-18T12:16:56.201035Z","steps":["trace[704700851] 'process raft request'  (duration: 125.931871ms)","trace[704700851] 'compare'  (duration: 54.754819ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:16:56.201066Z","caller":"traceutil/trace.go:172","msg":"trace[2048332216] transaction","detail":"{read_only:false; response_revision:50; number_of_response:1; }","duration":"171.11256ms","start":"2025-10-18T12:16:56.029941Z","end":"2025-10-18T12:16:56.201053Z","steps":["trace[2048332216] 'process raft request'  (duration: 171.025599ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:17:28 up 59 min,  0 user,  load average: 6.06, 4.39, 2.59
	Linux no-preload-406541 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8a7910d1e254a4c9ddf9dd63d897756890c32e8482a745f8a17abd1f4f5c87b] <==
	I1018 12:17:06.299726       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:17:06.300050       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 12:17:06.300206       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:17:06.300221       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:17:06.300240       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:17:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:17:06.503242       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:17:06.503277       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:17:06.503288       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:17:06.596912       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:17:06.904292       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:06.904322       1 metrics.go:72] Registering metrics
	I1018 12:17:06.904373       1 controller.go:711] "Syncing nftables rules"
	I1018 12:17:16.508870       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:17:16.508931       1 main.go:301] handling current node
	I1018 12:17:26.505855       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:17:26.505909       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4bc1b34b358fe3174c135bd300968b7f75d34698af632c61987413f2516d1285] <==
	I1018 12:16:55.251039       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:16:55.275810       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:16:55.383784       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:16:55.385956       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:16:55.602644       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:16:55.605305       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:16:55.615116       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:16:56.300711       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:16:56.312136       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:16:56.312158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:16:57.355094       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:16:57.461063       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:16:57.566115       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:16:57.575606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1018 12:16:57.577461       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:16:57.584724       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:16:58.182649       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:16:58.775154       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:16:58.790138       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:16:58.799887       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:17:03.836841       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:03.844259       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:03.985150       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 12:17:04.288698       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1018 12:17:26.929129       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:34964: use of closed network connection
	
	
	==> kube-controller-manager [79e75144ed5fb1c1401435a4bd8e923804e70035eb31ec716886d98459504ec3] <==
	I1018 12:17:03.182062       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 12:17:03.182074       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:17:03.182099       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:17:03.182205       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:17:03.182238       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:17:03.182223       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:17:03.182305       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:17:03.182403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:17:03.182427       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 12:17:03.182468       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:17:03.183492       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 12:17:03.185266       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:17:03.185368       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:17:03.185437       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:17:03.185480       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:17:03.185492       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:17:03.185499       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:17:03.186732       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:17:03.188186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:03.189524       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:17:03.192243       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-406541" podCIDRs=["10.244.0.0/24"]
	I1018 12:17:03.197323       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:17:03.199566       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:17:03.203865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:17:18.134044       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c6baf0327eaebe4b0694f1902976b3b191db2ec16085c1e7464859184d29fcb2] <==
	I1018 12:17:04.513020       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:17:04.589658       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:17:04.704357       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:17:04.704400       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 12:17:04.704492       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:17:04.793948       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:17:04.794095       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:17:04.812532       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:17:04.813711       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:17:04.813948       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:04.815949       1 config.go:200] "Starting service config controller"
	I1018 12:17:04.815969       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:17:04.815991       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:17:04.815997       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:17:04.816011       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:17:04.816016       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:17:04.816169       1 config.go:309] "Starting node config controller"
	I1018 12:17:04.816178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:17:04.816186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:17:04.916920       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:17:04.917369       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:17:04.917460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [54366a385307a369f7cc03a9b876a11cae2a6005dcee7ba581be4ee84bf3d786] <==
	E1018 12:16:55.214550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:16:55.214565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:16:55.214636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:16:55.214637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:16:55.214694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:16:56.060276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:16:56.064214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:16:56.249215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:16:56.265159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:16:56.273214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:16:56.281370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:16:56.281480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:56.304072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:16:56.325790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:56.358237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:16:56.383957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:16:56.421921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:16:56.494109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:16:56.510877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:16:56.544441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:16:56.556081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:16:56.639144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:56.758445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:16:56.810057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 12:16:59.909477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:16:59 no-preload-406541 kubelet[2264]: I1018 12:16:59.739123    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-406541" podStartSLOduration=3.739098126 podStartE2EDuration="3.739098126s" podCreationTimestamp="2025-10-18 12:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:16:59.718502266 +0000 UTC m=+1.164601899" watchObservedRunningTime="2025-10-18 12:16:59.739098126 +0000 UTC m=+1.185197761"
	Oct 18 12:16:59 no-preload-406541 kubelet[2264]: I1018 12:16:59.765679    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-406541" podStartSLOduration=3.765657911 podStartE2EDuration="3.765657911s" podCreationTimestamp="2025-10-18 12:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:16:59.739559176 +0000 UTC m=+1.185658807" watchObservedRunningTime="2025-10-18 12:16:59.765657911 +0000 UTC m=+1.211757539"
	Oct 18 12:16:59 no-preload-406541 kubelet[2264]: I1018 12:16:59.765982    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-406541" podStartSLOduration=1.765963913 podStartE2EDuration="1.765963913s" podCreationTimestamp="2025-10-18 12:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:16:59.765595818 +0000 UTC m=+1.211695449" watchObservedRunningTime="2025-10-18 12:16:59.765963913 +0000 UTC m=+1.212063549"
	Oct 18 12:16:59 no-preload-406541 kubelet[2264]: I1018 12:16:59.807742    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-406541" podStartSLOduration=1.8077219329999998 podStartE2EDuration="1.807721933s" podCreationTimestamp="2025-10-18 12:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:16:59.783255106 +0000 UTC m=+1.229354741" watchObservedRunningTime="2025-10-18 12:16:59.807721933 +0000 UTC m=+1.253821579"
	Oct 18 12:17:03 no-preload-406541 kubelet[2264]: I1018 12:17:03.196046    2264 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 12:17:03 no-preload-406541 kubelet[2264]: I1018 12:17:03.196786    2264 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.075897    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/396c662e-9914-4ffe-a26e-4fff6e123577-lib-modules\") pod \"kube-proxy-9vbmr\" (UID: \"396c662e-9914-4ffe-a26e-4fff6e123577\") " pod="kube-system/kube-proxy-9vbmr"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.075951    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2ecaa2c-b1fd-4635-8521-39461256e9ec-xtables-lock\") pod \"kindnet-dwg7c\" (UID: \"d2ecaa2c-b1fd-4635-8521-39461256e9ec\") " pod="kube-system/kindnet-dwg7c"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.075983    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/396c662e-9914-4ffe-a26e-4fff6e123577-kube-proxy\") pod \"kube-proxy-9vbmr\" (UID: \"396c662e-9914-4ffe-a26e-4fff6e123577\") " pod="kube-system/kube-proxy-9vbmr"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.076005    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/396c662e-9914-4ffe-a26e-4fff6e123577-xtables-lock\") pod \"kube-proxy-9vbmr\" (UID: \"396c662e-9914-4ffe-a26e-4fff6e123577\") " pod="kube-system/kube-proxy-9vbmr"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.076117    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2ecaa2c-b1fd-4635-8521-39461256e9ec-lib-modules\") pod \"kindnet-dwg7c\" (UID: \"d2ecaa2c-b1fd-4635-8521-39461256e9ec\") " pod="kube-system/kindnet-dwg7c"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.076165    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bp5l\" (UniqueName: \"kubernetes.io/projected/d2ecaa2c-b1fd-4635-8521-39461256e9ec-kube-api-access-7bp5l\") pod \"kindnet-dwg7c\" (UID: \"d2ecaa2c-b1fd-4635-8521-39461256e9ec\") " pod="kube-system/kindnet-dwg7c"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.076199    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxvcj\" (UniqueName: \"kubernetes.io/projected/396c662e-9914-4ffe-a26e-4fff6e123577-kube-api-access-lxvcj\") pod \"kube-proxy-9vbmr\" (UID: \"396c662e-9914-4ffe-a26e-4fff6e123577\") " pod="kube-system/kube-proxy-9vbmr"
	Oct 18 12:17:04 no-preload-406541 kubelet[2264]: I1018 12:17:04.076225    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d2ecaa2c-b1fd-4635-8521-39461256e9ec-cni-cfg\") pod \"kindnet-dwg7c\" (UID: \"d2ecaa2c-b1fd-4635-8521-39461256e9ec\") " pod="kube-system/kindnet-dwg7c"
	Oct 18 12:17:06 no-preload-406541 kubelet[2264]: I1018 12:17:06.729592    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9vbmr" podStartSLOduration=3.729567169 podStartE2EDuration="3.729567169s" podCreationTimestamp="2025-10-18 12:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:04.724606061 +0000 UTC m=+6.170705697" watchObservedRunningTime="2025-10-18 12:17:06.729567169 +0000 UTC m=+8.175666811"
	Oct 18 12:17:08 no-preload-406541 kubelet[2264]: I1018 12:17:08.319490    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dwg7c" podStartSLOduration=3.614422734 podStartE2EDuration="5.319463873s" podCreationTimestamp="2025-10-18 12:17:03 +0000 UTC" firstStartedPulling="2025-10-18 12:17:04.331312062 +0000 UTC m=+5.777411677" lastFinishedPulling="2025-10-18 12:17:06.036353187 +0000 UTC m=+7.482452816" observedRunningTime="2025-10-18 12:17:06.733186587 +0000 UTC m=+8.179286223" watchObservedRunningTime="2025-10-18 12:17:08.319463873 +0000 UTC m=+9.765563509"
	Oct 18 12:17:16 no-preload-406541 kubelet[2264]: I1018 12:17:16.857254    2264 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 12:17:16 no-preload-406541 kubelet[2264]: I1018 12:17:16.973519    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjh7m\" (UniqueName: \"kubernetes.io/projected/7c61b5da-ef85-46ff-a054-051967cf9d79-kube-api-access-zjh7m\") pod \"storage-provisioner\" (UID: \"7c61b5da-ef85-46ff-a054-051967cf9d79\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:16 no-preload-406541 kubelet[2264]: I1018 12:17:16.973565    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eee9c519-7100-41a0-8a95-6daae8b6b46b-config-volume\") pod \"coredns-66bc5c9577-bwvrq\" (UID: \"eee9c519-7100-41a0-8a95-6daae8b6b46b\") " pod="kube-system/coredns-66bc5c9577-bwvrq"
	Oct 18 12:17:16 no-preload-406541 kubelet[2264]: I1018 12:17:16.973652    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwft8\" (UniqueName: \"kubernetes.io/projected/eee9c519-7100-41a0-8a95-6daae8b6b46b-kube-api-access-wwft8\") pod \"coredns-66bc5c9577-bwvrq\" (UID: \"eee9c519-7100-41a0-8a95-6daae8b6b46b\") " pod="kube-system/coredns-66bc5c9577-bwvrq"
	Oct 18 12:17:16 no-preload-406541 kubelet[2264]: I1018 12:17:16.973680    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7c61b5da-ef85-46ff-a054-051967cf9d79-tmp\") pod \"storage-provisioner\" (UID: \"7c61b5da-ef85-46ff-a054-051967cf9d79\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:17 no-preload-406541 kubelet[2264]: I1018 12:17:17.754182    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bwvrq" podStartSLOduration=13.754159788 podStartE2EDuration="13.754159788s" podCreationTimestamp="2025-10-18 12:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:17.753941295 +0000 UTC m=+19.200040930" watchObservedRunningTime="2025-10-18 12:17:17.754159788 +0000 UTC m=+19.200259616"
	Oct 18 12:17:17 no-preload-406541 kubelet[2264]: I1018 12:17:17.775150    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.775125775 podStartE2EDuration="12.775125775s" podCreationTimestamp="2025-10-18 12:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:17.764956631 +0000 UTC m=+19.211056257" watchObservedRunningTime="2025-10-18 12:17:17.775125775 +0000 UTC m=+19.221225412"
	Oct 18 12:17:19 no-preload-406541 kubelet[2264]: I1018 12:17:19.895926    2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5lp\" (UniqueName: \"kubernetes.io/projected/f4ad8cbc-03d3-4f16-ab03-49d332b6fff3-kube-api-access-qx5lp\") pod \"busybox\" (UID: \"f4ad8cbc-03d3-4f16-ab03-49d332b6fff3\") " pod="default/busybox"
	Oct 18 12:17:21 no-preload-406541 kubelet[2264]: I1018 12:17:21.765591    2264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.394725798 podStartE2EDuration="2.765568688s" podCreationTimestamp="2025-10-18 12:17:19 +0000 UTC" firstStartedPulling="2025-10-18 12:17:20.134878564 +0000 UTC m=+21.580978183" lastFinishedPulling="2025-10-18 12:17:21.505721455 +0000 UTC m=+22.951821073" observedRunningTime="2025-10-18 12:17:21.765332763 +0000 UTC m=+23.211432399" watchObservedRunningTime="2025-10-18 12:17:21.765568688 +0000 UTC m=+23.211668324"
	
	
	==> storage-provisioner [f73bdfb336820f544495e69f8f84b55c865ef89d9512bc1563da859a53b35448] <==
	I1018 12:17:17.252477       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:17:17.260023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:17:17.260099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:17:17.262299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:17.268889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:17:17.269092       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:17:17.269247       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-406541_6de7835f-347e-42cb-927f-e2484225f0bf!
	I1018 12:17:17.269243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf0d3988-5bf7-437b-a187-0fa2d27fb75f", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-406541_6de7835f-347e-42cb-927f-e2484225f0bf became leader
	W1018 12:17:17.271223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:17.276696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:17:17.370193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-406541_6de7835f-347e-42cb-927f-e2484225f0bf!
	W1018 12:17:19.279827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:19.284159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:21.287702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:21.292031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:23.295874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:23.300063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:25.303864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:25.308281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:27.312099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:27.316898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-406541 -n no-preload-406541
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-406541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (246.823956ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:17:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
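For context on MK_ADDON_ENABLE_PAUSED: before enabling an addon, minikube checks whether the cluster is paused, and on crio that check shells out to runc, as the error chain above shows ("check paused: list paused: runc: sudo runc list -f json"). A rough reproduction of the failing probe, as a sketch rather than minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The probe that failed above: list runc-managed containers as JSON.
	// runc reporting "open /run/runc: no such file or directory" means its
	// default state directory is absent, which can happen when no container
	// has yet been created under that root on the node.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}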
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-028309 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-028309 describe deploy/metrics-server -n kube-system: exit status 1 (60.452105ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-028309 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
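The failure mode is worth spelling out: the test enables metrics-server with --images/--registries overrides, then expects the kubectl describe output for the deployment to contain the rewritten image reference. A hedged reconstruction of that assertion (not the test's verbatim code; names and context strings are taken from the log above):

package sketch

import (
	"os/exec"
	"strings"
	"testing"
)

// TestAddonImageOverride mirrors the check at start_stop_delete_test.go:219:
// after enabling the addon with an image override, the deployment should
// reference the fake registry. In this run it never gets that far, because
// the enable itself exited with MK_ADDON_ENABLE_PAUSED and no deployment exists.
func TestAddonImageOverride(t *testing.T) {
	out, _ := exec.Command("kubectl", "--context", "default-k8s-diff-port-028309",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(string(out), want) {
		t.Errorf("addon did not load correct image. Expected to contain %q. Got:\n%s", want, out)
	}
}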
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-028309
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-028309:

-- stdout --
	[
	    {
	        "Id": "189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63",
	        "Created": "2025-10-18T12:17:15.571662487Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304190,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:17:15.615647395Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/hostname",
	        "HostsPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/hosts",
	        "LogPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63-json.log",
	        "Name": "/default-k8s-diff-port-028309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-028309:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-028309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63",
	                "LowerDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-028309",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-028309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-028309",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-028309",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-028309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9331e2a9a24a91234745a983b2136126b757ad9d0277054e268c95478728019a",
	            "SandboxKey": "/var/run/docker/netns/9331e2a9a24a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-028309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:07:7e:15:45:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9cb7bc9061ba59e01198e7ea5f6cf6ddd6ba962ca18f957a0fbcc8a6c5eef0e9",
	                    "EndpointID": "fba3d0ca159ecd29ad938f0c9a7b26547d4e443d31a8b3b27e32ff82aea768e5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-028309",
	                        "189b5ecbc2d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
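The inspect JSON above is also what the harness mines for port mappings: later in this log, the cli_runner lines extract the mapped SSH port with a Go template over .NetworkSettings.Ports (there against old-k8s-version-024443). A minimal sketch of the same lookup for this container; the name and the expected port 33103 come from the JSON above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same template shape the harness runs via cli_runner further down;
	// for the container above this prints "33103" (see "22/tcp" in the JSON).
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"default-k8s-diff-port-028309").Output()
	if err != nil {
		fmt.Printf("inspect failed: %v\n", err)
		return
	}
	fmt.Printf("host port for 22/tcp: %s", out)
}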
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-028309 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-028309 logs -n 25: (1.324947003s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-376567 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo containerd config dump                                                                                                                                                                                                  │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                                                                                               │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p old-k8s-version-024443 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:17:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:17:45.818265  310517 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:17:45.818534  310517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:45.818545  310517 out.go:374] Setting ErrFile to fd 2...
	I1018 12:17:45.818549  310517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:45.818813  310517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:17:45.819346  310517 out.go:368] Setting JSON to false
	I1018 12:17:45.820567  310517 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3614,"bootTime":1760786252,"procs":386,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:17:45.820686  310517 start.go:141] virtualization: kvm guest
	I1018 12:17:45.822791  310517 out.go:179] * [no-preload-406541] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:17:45.824116  310517 notify.go:220] Checking for updates...
	I1018 12:17:45.824155  310517 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:17:45.825571  310517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:17:45.826898  310517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:45.828390  310517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:17:45.829891  310517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:17:45.831226  310517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:17:45.832937  310517 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:45.833485  310517 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:17:45.858009  310517 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:17:45.858151  310517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:45.918498  310517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:45.906848188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:45.918661  310517 docker.go:318] overlay module found
	I1018 12:17:45.920998  310517 out.go:179] * Using the docker driver based on existing profile
	I1018 12:17:45.922451  310517 start.go:305] selected driver: docker
	I1018 12:17:45.922486  310517 start.go:925] validating driver "docker" against &{Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:45.922591  310517 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:17:45.923204  310517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:45.980172  310517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:45.968945214 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:45.980486  310517 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:45.980513  310517 cni.go:84] Creating CNI manager for ""
	I1018 12:17:45.980554  310517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:45.980590  310517 start.go:349] cluster config:
	{Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:45.982504  310517 out.go:179] * Starting "no-preload-406541" primary control-plane node in "no-preload-406541" cluster
	I1018 12:17:45.984470  310517 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:17:45.985833  310517 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:17:45.986928  310517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:45.986988  310517 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:17:45.987099  310517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/config.json ...
	I1018 12:17:45.987161  310517 cache.go:107] acquiring lock: {Name:mk2851c90c3cee4b8dc905a54300119306c34425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987186  310517 cache.go:107] acquiring lock: {Name:mk7beac465d3e33866f36c7d2d6c2d5c7648cadc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987187  310517 cache.go:107] acquiring lock: {Name:mk12378f271fac5391329588d22fd9f6b5f2efe9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987245  310517 cache.go:107] acquiring lock: {Name:mkf899cc61754339eb7c16b16d780a0d64247c63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987276  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 12:17:45.987288  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 12:17:45.987289  310517 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 121.364µs
	I1018 12:17:45.987274  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 12:17:45.987298  310517 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 54.761µs
	I1018 12:17:45.987306  310517 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 12:17:45.987308  310517 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 12:17:45.987307  310517 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 135.39µs
	I1018 12:17:45.987274  310517 cache.go:107] acquiring lock: {Name:mkc51ddd9714d0bce2fec89ca6505008f746ff3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987322  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 12:17:45.987324  310517 cache.go:107] acquiring lock: {Name:mk96d90bcd247dcb2d931dae4c9362f05288238f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987329  310517 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.853µs
	I1018 12:17:45.987345  310517 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 12:17:45.987322  310517 cache.go:107] acquiring lock: {Name:mk574d4568922c0dc77dc7227f9dde52e8f9b559 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987360  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 12:17:45.987368  310517 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 45.589µs
	I1018 12:17:45.987375  310517 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 12:17:45.987319  310517 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 12:17:45.987373  310517 cache.go:107] acquiring lock: {Name:mkd955903c0f718f7272b2c35c91d555532a9b1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987420  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 12:17:45.987439  310517 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.587µs
	I1018 12:17:45.987455  310517 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 12:17:45.987446  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 12:17:45.987476  310517 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 194.761µs
	I1018 12:17:45.987480  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 12:17:45.987488  310517 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 12:17:45.987495  310517 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 175.237µs
	I1018 12:17:45.987511  310517 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 12:17:45.987520  310517 cache.go:87] Successfully saved all images to host disk.
	I1018 12:17:46.008377  310517 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:17:46.008400  310517 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:17:46.008414  310517 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:17:46.008441  310517 start.go:360] acquireMachinesLock for no-preload-406541: {Name:mk0766028e9fb536dc77f73d30a9c9fc1a771d70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:46.008506  310517 start.go:364] duration metric: took 46.934µs to acquireMachinesLock for "no-preload-406541"
	I1018 12:17:46.008529  310517 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:17:46.008539  310517 fix.go:54] fixHost starting: 
	I1018 12:17:46.008842  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:46.028023  310517 fix.go:112] recreateIfNeeded on no-preload-406541: state=Stopped err=<nil>
	W1018 12:17:46.028064  310517 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:17:43.005465  309793 out.go:252] * Restarting existing docker container for "old-k8s-version-024443" ...
	I1018 12:17:43.005538  309793 cli_runner.go:164] Run: docker start old-k8s-version-024443
	I1018 12:17:43.262721  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:43.281797  309793 kic.go:430] container "old-k8s-version-024443" state is running.
	I1018 12:17:43.282231  309793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-024443
	I1018 12:17:43.301262  309793 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/config.json ...
	I1018 12:17:43.301521  309793 machine.go:93] provisionDockerMachine start ...
	I1018 12:17:43.301602  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:43.321409  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:43.321666  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:43.321682  309793 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:17:43.322298  309793 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56892->127.0.0.1:33108: read: connection reset by peer
	I1018 12:17:46.463800  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-024443
	
	I1018 12:17:46.463827  309793 ubuntu.go:182] provisioning hostname "old-k8s-version-024443"
	I1018 12:17:46.463875  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:46.483267  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:46.483549  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:46.483573  309793 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-024443 && echo "old-k8s-version-024443" | sudo tee /etc/hostname
	I1018 12:17:46.634868  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-024443
	
	I1018 12:17:46.634965  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:46.655200  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:46.655507  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:46.655535  309793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-024443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-024443/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-024443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:17:46.788444  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:17:46.788471  309793 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:17:46.788521  309793 ubuntu.go:190] setting up certificates
	I1018 12:17:46.788535  309793 provision.go:84] configureAuth start
	I1018 12:17:46.788589  309793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-024443
	I1018 12:17:46.806062  309793 provision.go:143] copyHostCerts
	I1018 12:17:46.806115  309793 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:17:46.806125  309793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:17:46.806195  309793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:17:46.806317  309793 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:17:46.806330  309793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:17:46.806357  309793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:17:46.806433  309793 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:17:46.806440  309793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:17:46.806463  309793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:17:46.806523  309793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-024443 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-024443]
	I1018 12:17:47.384178  309793 provision.go:177] copyRemoteCerts
	I1018 12:17:47.384234  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:17:47.384267  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:47.402639  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:47.501579  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:17:47.519836  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 12:17:47.537654  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:17:47.555436  309793 provision.go:87] duration metric: took 766.883501ms to configureAuth
	I1018 12:17:47.555469  309793 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:17:47.555679  309793 config.go:182] Loaded profile config "old-k8s-version-024443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:17:47.555808  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:47.576349  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:47.576603  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:47.576621  309793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:17:47.887626  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:17:47.887664  309793 machine.go:96] duration metric: took 4.586119524s to provisionDockerMachine
	I1018 12:17:47.887677  309793 start.go:293] postStartSetup for "old-k8s-version-024443" (driver="docker")
	I1018 12:17:47.887689  309793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:17:47.887791  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:17:47.887843  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:47.906882  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.005047  309793 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:17:48.008814  309793 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:17:48.008839  309793 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:17:48.008852  309793 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:17:48.008904  309793 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:17:48.009008  309793 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:17:48.009131  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:17:48.017240  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:48.035887  309793 start.go:296] duration metric: took 148.197454ms for postStartSetup
	I1018 12:17:48.035967  309793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:17:48.036009  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:48.054834  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.149141  309793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:17:48.154007  309793 fix.go:56] duration metric: took 5.168121201s for fixHost
	I1018 12:17:48.154038  309793 start.go:83] releasing machines lock for "old-k8s-version-024443", held for 5.168177217s
	I1018 12:17:48.154126  309793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-024443
	I1018 12:17:48.173319  309793 ssh_runner.go:195] Run: cat /version.json
	I1018 12:17:48.173373  309793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:17:48.173422  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:48.173423  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:48.192911  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.193887  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.354736  309793 ssh_runner.go:195] Run: systemctl --version
	I1018 12:17:48.362750  309793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:17:48.401550  309793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:17:48.406989  309793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:17:48.407062  309793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:17:48.415599  309793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
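Nothing matched in this run, but when a default bridge or podman CNI config is present, the find/mv above parks it by appending .mk_disabled rather than deleting it, leaving kindnet as the only active CNI. A hypothetical example of the result:

    $ sudo find /etc/cni/net.d -name '*.mk_disabled'
    /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled    # hypothetical filename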
	I1018 12:17:48.415624  309793 start.go:495] detecting cgroup driver to use...
	I1018 12:17:48.415659  309793 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:17:48.415701  309793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:17:48.431310  309793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:17:48.444921  309793 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:17:48.444986  309793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:17:48.460916  309793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:17:48.474427  309793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:17:48.559191  309793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:17:48.644895  309793 docker.go:234] disabling docker service ...
	I1018 12:17:48.644960  309793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:17:48.659881  309793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:17:48.674682  309793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:17:48.762387  309793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:17:48.842445  309793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:17:48.855257  309793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:17:48.870442  309793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 12:17:48.870509  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.879856  309793 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:17:48.879925  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.889083  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.898192  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.907723  309793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:17:48.916533  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.926511  309793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.935628  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.945196  309793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:17:48.953082  309793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:17:48.961367  309793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:49.045719  309793 ssh_runner.go:195] Run: sudo systemctl restart crio
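The sed batch above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, put conmon in the pod cgroup, and open unprivileged low ports via default_sysctls (with IP forwarding enabled separately through /proc). A sketch of the resulting drop-in; the table headers are an assumption from CRI-O's stock config layout, since the sed commands only match the keys:

    $ cat /etc/crio/crio.conf.d/02-crio.conf
    # (section headers assumed; the sed edits above only touch the keys)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]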
	I1018 12:17:49.159358  309793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:17:49.159419  309793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:17:49.163614  309793 start.go:563] Will wait 60s for crictl version
	I1018 12:17:49.163679  309793 ssh_runner.go:195] Run: which crictl
	I1018 12:17:49.167344  309793 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:17:49.192247  309793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:17:49.192325  309793 ssh_runner.go:195] Run: crio --version
	I1018 12:17:49.221474  309793 ssh_runner.go:195] Run: crio --version
	I1018 12:17:49.251652  309793 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	W1018 12:17:46.692698  303392 node_ready.go:57] node "default-k8s-diff-port-028309" has "Ready":"False" status (will retry)
	I1018 12:17:47.692824  303392 node_ready.go:49] node "default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:47.692857  303392 node_ready.go:38] duration metric: took 12.003720394s for node "default-k8s-diff-port-028309" to be "Ready" ...
	I1018 12:17:47.692874  303392 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:47.692929  303392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:47.705357  303392 api_server.go:72] duration metric: took 12.286538652s to wait for apiserver process to appear ...
	I1018 12:17:47.705379  303392 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:47.705395  303392 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:17:47.710222  303392 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
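The healthz probe above can be reproduced by hand; /healthz is served to unauthenticated clients by default (via the system:public-info-viewer binding), so skipping verification of the minikubeCA-signed cert is enough:

    # -k: the apiserver cert is signed by the local minikubeCA
    $ curl -sk https://192.168.103.2:8444/healthz
    ok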
	I1018 12:17:47.711107  303392 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:47.711130  303392 api_server.go:131] duration metric: took 5.745655ms to wait for apiserver health ...
	I1018 12:17:47.711140  303392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:47.714331  303392 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:47.714361  303392 system_pods.go:61] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:47.714368  303392 system_pods.go:61] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:47.714373  303392 system_pods.go:61] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:47.714377  303392 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:47.714380  303392 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:47.714384  303392 system_pods.go:61] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:47.714387  303392 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:47.714392  303392 system_pods.go:61] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:47.714401  303392 system_pods.go:74] duration metric: took 3.25643ms to wait for pod list to return data ...
	I1018 12:17:47.714409  303392 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:47.716820  303392 default_sa.go:45] found service account: "default"
	I1018 12:17:47.716836  303392 default_sa.go:55] duration metric: took 2.423051ms for default service account to be created ...
	I1018 12:17:47.716844  303392 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:47.719390  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:47.719418  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:47.719427  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:47.719436  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:47.719442  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:47.719450  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:47.719463  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:47.719469  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:47.719481  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:47.719504  303392 retry.go:31] will retry after 235.205246ms: missing components: kube-dns
	I1018 12:17:47.958395  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:47.958430  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:47.958438  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:47.958445  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:47.958450  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:47.958455  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:47.958460  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:47.958466  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:47.958473  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:47.958493  303392 retry.go:31] will retry after 235.162839ms: missing components: kube-dns
	I1018 12:17:48.197604  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:48.197647  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:48.197657  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:48.197665  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:48.197671  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:48.197676  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:48.197689  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:48.197696  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:48.197707  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:48.197730  303392 retry.go:31] will retry after 462.764ms: missing components: kube-dns
	I1018 12:17:48.665815  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:48.665847  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running
	I1018 12:17:48.665855  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:48.665861  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:48.665866  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:48.665871  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:48.665876  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:48.665882  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:48.665887  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:17:48.665898  303392 system_pods.go:126] duration metric: took 949.048167ms to wait for k8s-apps to be running ...
	I1018 12:17:48.665912  303392 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:48.665972  303392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:48.679470  303392 system_svc.go:56] duration metric: took 13.550292ms WaitForService to wait for kubelet
	I1018 12:17:48.679503  303392 kubeadm.go:586] duration metric: took 13.26068638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:48.679523  303392 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:48.682666  303392 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:48.682691  303392 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:48.682704  303392 node_conditions.go:105] duration metric: took 3.176473ms to run NodePressure ...
	I1018 12:17:48.682715  303392 start.go:241] waiting for startup goroutines ...
	I1018 12:17:48.682723  303392 start.go:246] waiting for cluster config update ...
	I1018 12:17:48.682735  303392 start.go:255] writing updated cluster config ...
	I1018 12:17:48.683022  303392 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:48.686875  303392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:48.690618  303392 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.695149  303392 pod_ready.go:94] pod "coredns-66bc5c9577-7qgqj" is "Ready"
	I1018 12:17:48.695177  303392 pod_ready.go:86] duration metric: took 4.535928ms for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.697658  303392 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.702367  303392 pod_ready.go:94] pod "etcd-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:48.702388  303392 pod_ready.go:86] duration metric: took 4.706068ms for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.704683  303392 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.713736  303392 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:48.713782  303392 pod_ready.go:86] duration metric: took 9.071932ms for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.716521  303392 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.091627  303392 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:49.091653  303392 pod_ready.go:86] duration metric: took 375.10527ms for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.291903  303392 pod_ready.go:83] waiting for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.691733  303392 pod_ready.go:94] pod "kube-proxy-bffkr" is "Ready"
	I1018 12:17:49.691780  303392 pod_ready.go:86] duration metric: took 399.85273ms for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.892297  303392 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:50.291380  303392 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:50.291413  303392 pod_ready.go:86] duration metric: took 399.08983ms for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:50.291429  303392 pod_ready.go:40] duration metric: took 1.604526893s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
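This extra wait loop is roughly what kubectl wait does for each labelled control-plane component; a manual equivalent for the kube-dns case (sketch):

    $ kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    pod/coredns-66bc5c9577-7qgqj condition met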
	I1018 12:17:50.348944  303392 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:17:50.353333  303392 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-028309" cluster and "default" namespace by default
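"Done!" here means the kubeconfig at /home/jenkins/minikube-integration/21647-5865/kubeconfig now carries a context for this cluster; a quick check (sketch, context name assumed to match the profile, as minikube does by default):

    $ kubectl config current-context
    default-k8s-diff-port-028309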
	I1018 12:17:49.253107  309793 cli_runner.go:164] Run: docker network inspect old-k8s-version-024443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:49.270942  309793 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:17:49.275182  309793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
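The bash -c pipeline above rewrites /etc/hosts atomically (filter out any stale entry, append the fresh one, copy the temp file back) so the guest can resolve the host-side gateway. Afterwards the file should contain:

    $ grep host.minikube.internal /etc/hosts
    192.168.85.1	host.minikube.internal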
	I1018 12:17:49.286027  309793 kubeadm.go:883] updating cluster {Name:old-k8s-version-024443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-024443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:17:49.286182  309793 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 12:17:49.286226  309793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:49.319603  309793 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:49.319623  309793 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:17:49.319666  309793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:49.345865  309793 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:49.345892  309793 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:17:49.345902  309793 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1018 12:17:49.345988  309793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-024443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-024443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
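In the kubelet unit drop-in above, the bare ExecStart= line is the standard systemd override idiom: it clears the ExecStart inherited from the base unit before the next line redefines it. The merged result (the base unit scp'd to /lib/systemd/system/kubelet.service below, plus the 10-kubeadm.conf drop-in) can be inspected with:

    $ systemctl cat kubelet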
	I1018 12:17:49.346052  309793 ssh_runner.go:195] Run: crio config
	I1018 12:17:49.398407  309793 cni.go:84] Creating CNI manager for ""
	I1018 12:17:49.398428  309793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:49.398444  309793 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:17:49.398467  309793 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-024443 NodeName:old-k8s-version-024443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:17:49.398596  309793 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-024443"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
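This generated manifest lands at /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below) and is later diffed against the active /var/tmp/minikube/kubeadm.yaml to decide whether the control plane must be regenerated, which is why the restart path further down can conclude that no reconfiguration is required:

    $ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
        && echo "no reconfiguration needed"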
	
	I1018 12:17:49.398652  309793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 12:17:49.407843  309793 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:17:49.407920  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:17:49.416414  309793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 12:17:49.430154  309793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:17:49.443468  309793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1018 12:17:49.456536  309793 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:17:49.460426  309793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:49.470456  309793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:49.552794  309793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:49.573678  309793 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443 for IP: 192.168.85.2
	I1018 12:17:49.573704  309793 certs.go:195] generating shared ca certs ...
	I1018 12:17:49.573726  309793 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:49.574000  309793 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:17:49.574063  309793 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:17:49.574077  309793 certs.go:257] generating profile certs ...
	I1018 12:17:49.574205  309793 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/client.key
	I1018 12:17:49.574303  309793 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/apiserver.key.40a89ae9
	I1018 12:17:49.574348  309793 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/proxy-client.key
	I1018 12:17:49.574449  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:17:49.574476  309793 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:17:49.574485  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:17:49.574506  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:17:49.574528  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:17:49.574547  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:17:49.574584  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:49.575220  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:17:49.595131  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:17:49.615276  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:17:49.636377  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:17:49.660922  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:17:49.685225  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:17:49.705144  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:17:49.725305  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:17:49.745531  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:17:49.766346  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:17:49.787134  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:17:49.806241  309793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:17:49.819877  309793 ssh_runner.go:195] Run: openssl version
	I1018 12:17:49.827197  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:17:49.837292  309793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:49.841647  309793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:49.841706  309793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:49.882591  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:17:49.891421  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:17:49.900888  309793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:17:49.905260  309793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:17:49.905326  309793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:17:49.943114  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:17:49.952744  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:17:49.962938  309793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:17:49.966930  309793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:17:49.966991  309793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:17:50.003652  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
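The openssl x509 -hash calls above drive the classic OpenSSL CA-directory scheme: each CA installed under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs, which is how names like b5213941.0 were chosen. Reproducing the minikubeCA case:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ readlink /etc/ssl/certs/b5213941.0
    /etc/ssl/certs/minikubeCA.pem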
	I1018 12:17:50.012856  309793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:17:50.017068  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:17:50.054430  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:17:50.097562  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:17:50.143080  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:17:50.189734  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:17:50.248940  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
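Each -checkend 86400 call above asks whether the certificate expires within the next 24 hours (86400 seconds); openssl exits 0 and prints a one-liner when it does not, which is what lets this restart path reuse the existing certs instead of regenerating them:

    $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
    Certificate will not expire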
	I1018 12:17:50.301380  309793 kubeadm.go:400] StartCluster: {Name:old-k8s-version-024443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-024443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:50.301490  309793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:17:50.301551  309793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:17:50.340573  309793 cri.go:89] found id: "c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79"
	I1018 12:17:50.340602  309793 cri.go:89] found id: "b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3"
	I1018 12:17:50.340608  309793 cri.go:89] found id: "c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f"
	I1018 12:17:50.340613  309793 cri.go:89] found id: "cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923"
	I1018 12:17:50.340617  309793 cri.go:89] found id: ""
	I1018 12:17:50.340989  309793 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:17:50.357230  309793 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:17:50Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:17:50.357305  309793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:17:50.367509  309793 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:17:50.367534  309793 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:17:50.367615  309793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:17:50.378221  309793 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:17:50.379393  309793 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-024443" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:50.380074  309793 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-024443" cluster setting kubeconfig missing "old-k8s-version-024443" context setting]
	I1018 12:17:50.380999  309793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:50.382855  309793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:17:50.392271  309793 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 12:17:50.392309  309793 kubeadm.go:601] duration metric: took 24.768829ms to restartPrimaryControlPlane
	I1018 12:17:50.392321  309793 kubeadm.go:402] duration metric: took 90.950451ms to StartCluster
	I1018 12:17:50.392339  309793 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:50.392392  309793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:50.394423  309793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:50.394689  309793 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:17:50.394877  309793 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:17:50.394965  309793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-024443"
	I1018 12:17:50.394965  309793 config.go:182] Loaded profile config "old-k8s-version-024443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:17:50.394990  309793 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-024443"
	W1018 12:17:50.394999  309793 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:17:50.395011  309793 addons.go:69] Setting dashboard=true in profile "old-k8s-version-024443"
	I1018 12:17:50.395024  309793 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:17:50.395025  309793 addons.go:238] Setting addon dashboard=true in "old-k8s-version-024443"
	W1018 12:17:50.395035  309793 addons.go:247] addon dashboard should already be in state true
	I1018 12:17:50.395059  309793 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:17:50.395077  309793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-024443"
	I1018 12:17:50.395096  309793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-024443"
	I1018 12:17:50.395386  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:50.395576  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:50.395883  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
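The three addons being forced on here (storage-provisioner, dashboard, default-storageclass) can be confirmed per profile once the start finishes; a sketch:

    $ minikube addons list -p old-k8s-version-024443 | grep -E 'dashboard|storage'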
	I1018 12:17:50.400893  309793 out.go:179] * Verifying Kubernetes components...
	I1018 12:17:50.402806  309793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:50.432834  309793 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:17:50.433968  309793 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-024443"
	W1018 12:17:50.434047  309793 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:17:50.434111  309793 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:17:50.434428  309793 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:50.434457  309793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:17:50.434519  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:50.435101  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:50.438201  309793 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:17:50.439409  309793 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1018 12:17:46.486330  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	W1018 12:17:48.985939  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	I1018 12:17:46.029837  310517 out.go:252] * Restarting existing docker container for "no-preload-406541" ...
	I1018 12:17:46.029917  310517 cli_runner.go:164] Run: docker start no-preload-406541
	I1018 12:17:46.292072  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:46.312729  310517 kic.go:430] container "no-preload-406541" state is running.
	I1018 12:17:46.313203  310517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-406541
	I1018 12:17:46.334301  310517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/config.json ...
	I1018 12:17:46.334550  310517 machine.go:93] provisionDockerMachine start ...
	I1018 12:17:46.334625  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:46.355571  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:46.355816  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:46.355831  310517 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:17:46.356532  310517 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40868->127.0.0.1:33113: read: connection reset by peer
	I1018 12:17:49.498107  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-406541
	
	I1018 12:17:49.498139  310517 ubuntu.go:182] provisioning hostname "no-preload-406541"
	I1018 12:17:49.498216  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:49.522328  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:49.522570  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:49.522585  310517 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-406541 && echo "no-preload-406541" | sudo tee /etc/hostname
	I1018 12:17:49.672945  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-406541
	
	I1018 12:17:49.673079  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:49.694618  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:49.694858  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:49.694877  310517 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-406541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-406541/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-406541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:17:49.833408  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
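The quoted script above only touches /etc/hosts when the hostname is missing, mapping it to 127.0.1.1 in the Debian style so that sudo and friends resolve the machine name without DNS, hence the empty command output here. A check after provisioning (sketch):

    $ getent hosts no-preload-406541
    127.0.1.1       no-preload-406541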
	I1018 12:17:49.833445  310517 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:17:49.833506  310517 ubuntu.go:190] setting up certificates
	I1018 12:17:49.833526  310517 provision.go:84] configureAuth start
	I1018 12:17:49.833597  310517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-406541
	I1018 12:17:49.853415  310517 provision.go:143] copyHostCerts
	I1018 12:17:49.853475  310517 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:17:49.853499  310517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:17:49.853580  310517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:17:49.853696  310517 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:17:49.853709  310517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:17:49.853751  310517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:17:49.853857  310517 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:17:49.853871  310517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:17:49.853908  310517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:17:49.853979  310517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.no-preload-406541 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-406541]
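The server cert generated above carries the listed SANs (loopback, the container IP 192.168.94.2, and the hostnames), which is what lets the same endpoint be reached as localhost from the host and by IP inside the Docker network. Inspecting them (sketch):

    $ openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'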
	I1018 12:17:50.440481  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:17:50.440498  309793 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:17:50.440555  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:50.471267  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:50.473997  309793 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:50.474041  309793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:17:50.474133  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:50.481664  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:50.506684  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:50.594327  309793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:50.612619  309793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-024443" to be "Ready" ...
	I1018 12:17:50.615556  309793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:50.624079  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:17:50.624103  309793 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:17:50.640897  309793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:50.646776  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:17:50.646802  309793 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:17:50.677507  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:17:50.677533  309793 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:17:50.698558  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:17:50.698586  309793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:17:50.717037  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:17:50.717067  309793 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:17:50.737193  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:17:50.737216  309793 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:17:50.755325  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:17:50.755350  309793 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:17:50.769185  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:17:50.769212  309793 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:17:50.783320  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:50.783347  309793 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:17:50.798045  309793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:51.016379  310517 provision.go:177] copyRemoteCerts
	I1018 12:17:51.016450  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:17:51.016487  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.036946  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.136726  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:17:51.155743  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:17:51.176377  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:17:51.195810  310517 provision.go:87] duration metric: took 1.362266572s to configureAuth
	I1018 12:17:51.195837  310517 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:17:51.196034  310517 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:51.196137  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.215756  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:51.216008  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:51.216026  310517 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:17:51.522495  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:17:51.522526  310517 machine.go:96] duration metric: took 5.187956853s to provisionDockerMachine
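The drop-in written above is what systemd's crio unit sources for extra flags; the whole operation is a single remote shell pipeline (mkdir, printf into tee, then a service restart). A minimal Go sketch of composing that command string (buildCRIOSysconfigCmd is a hypothetical helper for illustration, not minikube's actual API):

package main

import "fmt"

// buildCRIOSysconfigCmd mirrors the remote shell pipeline from the log: it
// writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and then
// restarts the crio service so the flag takes effect.
func buildCRIOSysconfigCmd(insecureRegistry string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", insecureRegistry)
	return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
		opts + "\n" +
		"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
}

func main() {
	fmt.Println(buildCRIOSysconfigCmd("10.96.0.0/12"))
}
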
	I1018 12:17:51.522539  310517 start.go:293] postStartSetup for "no-preload-406541" (driver="docker")
	I1018 12:17:51.522554  310517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:17:51.522617  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:17:51.522661  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.544856  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.647828  310517 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:17:51.651575  310517 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:17:51.651603  310517 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:17:51.651614  310517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:17:51.651671  310517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:17:51.651740  310517 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:17:51.651874  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:17:51.660448  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:51.679182  310517 start.go:296] duration metric: took 156.627397ms for postStartSetup
	I1018 12:17:51.679256  310517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:17:51.679298  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.698458  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.793433  310517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:17:51.798480  310517 fix.go:56] duration metric: took 5.789933491s for fixHost
	I1018 12:17:51.798511  310517 start.go:83] releasing machines lock for "no-preload-406541", held for 5.789991279s
	I1018 12:17:51.798584  310517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-406541
	I1018 12:17:51.816606  310517 ssh_runner.go:195] Run: cat /version.json
	I1018 12:17:51.816625  310517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:17:51.816658  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.816675  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.835906  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.836069  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.992984  310517 ssh_runner.go:195] Run: systemctl --version
	I1018 12:17:52.000371  310517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:17:52.042608  310517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:17:52.048811  310517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:17:52.048884  310517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:17:52.058459  310517 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:17:52.058487  310517 start.go:495] detecting cgroup driver to use...
	I1018 12:17:52.058516  310517 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:17:52.058562  310517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:17:52.075638  310517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:17:52.091731  310517 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:17:52.091834  310517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:17:52.110791  310517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:17:52.127170  310517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:17:52.230093  310517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:17:52.341976  310517 docker.go:234] disabling docker service ...
	I1018 12:17:52.342043  310517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:17:52.359910  310517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:17:52.375430  310517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:17:52.469889  310517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:17:52.563511  310517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:17:52.579096  310517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:17:52.594906  310517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:17:52.594969  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.605127  310517 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:17:52.605201  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.615031  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.628121  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.638844  310517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:17:52.648105  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.658328  310517 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.667871  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.677553  310517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:17:52.685836  310517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:17:52.694567  310517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:52.792011  310517 ssh_runner.go:195] Run: sudo systemctl restart crio
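The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager = "systemd", a fresh conmon_cgroup = "pod" line, and a default_sysctls entry that opens unprivileged ports, followed by daemon-reload and a crio restart. A sketch assembling the first few of those edit commands in Go (crioSedCmds is an illustrative helper; the sysctl edits are omitted for brevity):

package main

import "fmt"

const conf = "/etc/crio/crio.conf.d/02-crio.conf"

// crioSedCmds reproduces the in-place config edits from the log as shell
// command strings: set the pause image, force the systemd cgroup manager,
// and replace any conmon_cgroup line with conmon_cgroup = "pod".
func crioSedCmds(pauseImage string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioSedCmds("registry.k8s.io/pause:3.10.1") {
		fmt.Println(c)
	}
}
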
	I1018 12:17:52.939411  310517 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:17:52.939478  310517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:17:52.943888  310517 start.go:563] Will wait 60s for crictl version
	I1018 12:17:52.943953  310517 ssh_runner.go:195] Run: which crictl
	I1018 12:17:52.948811  310517 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:17:52.981686  310517 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:17:52.981782  310517 ssh_runner.go:195] Run: crio --version
	I1018 12:17:53.012712  310517 ssh_runner.go:195] Run: crio --version
	I1018 12:17:53.065174  310517 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:17:53.045966  309793 node_ready.go:49] node "old-k8s-version-024443" is "Ready"
	I1018 12:17:53.046002  309793 node_ready.go:38] duration metric: took 2.433336279s for node "old-k8s-version-024443" to be "Ready" ...
	I1018 12:17:53.046019  309793 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:53.046072  309793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:53.784407  309793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.168814086s)
	I1018 12:17:53.784417  309793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.143486767s)
	I1018 12:17:54.324158  309793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.526042493s)
	I1018 12:17:54.325032  309793 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.278943628s)
	I1018 12:17:54.325076  309793 api_server.go:72] duration metric: took 3.930353705s to wait for apiserver process to appear ...
	I1018 12:17:54.325083  309793 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:54.325101  309793 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:17:54.327905  309793 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-024443 addons enable metrics-server
	
	I1018 12:17:54.329691  309793 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 12:17:53.066489  310517 cli_runner.go:164] Run: docker network inspect no-preload-406541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:53.089888  310517 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 12:17:53.094609  310517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:53.111803  310517 kubeadm.go:883] updating cluster {Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:17:53.111946  310517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:53.112010  310517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:53.150660  310517 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:53.150683  310517 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:17:53.150690  310517 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 12:17:53.150808  310517 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-406541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:17:53.150893  310517 ssh_runner.go:195] Run: crio config
	I1018 12:17:53.204319  310517 cni.go:84] Creating CNI manager for ""
	I1018 12:17:53.204355  310517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:53.204376  310517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:17:53.204405  310517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-406541 NodeName:no-preload-406541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:17:53.204562  310517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-406541"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
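The manifest above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets copied to /var/tmp/minikube/kubeadm.yaml.new and later compared with sudo diff -u against the existing file. A small Go check of the document count, under the assumption that documents are separated by standalone --- lines:

package main

import (
	"fmt"
	"strings"
)

// splitYAMLDocs splits a multi-document YAML stream on standalone "---"
// separator lines and drops empty documents.
func splitYAMLDocs(stream string) []string {
	var docs []string
	for _, d := range strings.Split(stream, "\n---\n") {
		if strings.TrimSpace(d) != "" {
			docs = append(docs, d)
		}
	}
	return docs
}

func main() {
	stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println("documents:", len(splitYAMLDocs(stream))) // documents: 4
}
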
	
	I1018 12:17:53.204633  310517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:17:53.215460  310517 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:17:53.215537  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:17:53.224850  310517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:17:53.240461  310517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:17:53.261283  310517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 12:17:53.277344  310517 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:17:53.281549  310517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
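The hosts update above follows a remove-then-append pattern: grep -v drops any line already ending in a tab plus the hostname, the fresh mapping is echoed after it, and the result is copied back over /etc/hosts via a temp file. An in-memory Go equivalent of that upsert (upsertHost is a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo pipeline from the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.94.2", "control-plane.minikube.internal"))
}
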
	I1018 12:17:53.292682  310517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:53.396838  310517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:53.418362  310517 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541 for IP: 192.168.94.2
	I1018 12:17:53.418391  310517 certs.go:195] generating shared ca certs ...
	I1018 12:17:53.418414  310517 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:53.418584  310517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:17:53.418650  310517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:17:53.418668  310517 certs.go:257] generating profile certs ...
	I1018 12:17:53.418799  310517 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/client.key
	I1018 12:17:53.418882  310517 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/apiserver.key.4f4cf101
	I1018 12:17:53.418928  310517 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/proxy-client.key
	I1018 12:17:53.419104  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:17:53.419149  310517 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:17:53.419161  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:17:53.419188  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:17:53.419218  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:17:53.419250  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:17:53.419302  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:53.420113  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:17:53.441462  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:17:53.461597  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:17:53.484380  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:17:53.522157  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:17:53.547074  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:17:53.574502  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:17:53.595620  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:17:53.615749  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:17:53.640103  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:17:53.662488  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:17:53.685642  310517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:17:53.701661  310517 ssh_runner.go:195] Run: openssl version
	I1018 12:17:53.710140  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:17:53.722521  310517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:17:53.727297  310517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:17:53.727357  310517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:17:53.777720  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:17:53.788688  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:17:53.801703  310517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:17:53.809690  310517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:17:53.809779  310517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:17:53.850035  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:17:53.861385  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:17:53.871682  310517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:53.876219  310517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:53.876284  310517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:53.914881  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
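Each CA above is made discoverable to OpenSSL by symlinking /etc/ssl/certs/<subject-hash>.0 at it, where the hash comes from openssl x509 -hash -noout, exactly as the log runs it. A hedged Go sketch of deriving that link name (assumes openssl on PATH; error handling trimmed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash shells out to openssl, as the log does, to obtain the subject
// hash used as the /etc/ssl/certs/<hash>.0 symlink name.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
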
	I1018 12:17:53.925639  310517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:17:53.930104  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:17:53.983731  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:17:54.050477  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:17:54.116416  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:17:54.181269  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:17:54.244500  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 12:17:54.302454  310517 kubeadm.go:400] StartCluster: {Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:54.302534  310517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:17:54.302581  310517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:17:54.347167  310517 cri.go:89] found id: "5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e"
	I1018 12:17:54.347193  310517 cri.go:89] found id: "133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3"
	I1018 12:17:54.347199  310517 cri.go:89] found id: "37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7"
	I1018 12:17:54.347203  310517 cri.go:89] found id: "786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13"
	I1018 12:17:54.347207  310517 cri.go:89] found id: ""
	I1018 12:17:54.347261  310517 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:17:54.365891  310517 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:17:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:17:54.366004  310517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:17:54.379456  310517 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:17:54.379483  310517 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:17:54.379530  310517 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:17:54.390456  310517 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:17:54.391845  310517 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-406541" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:54.392750  310517 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-406541" cluster setting kubeconfig missing "no-preload-406541" context setting]
	I1018 12:17:54.394396  310517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:54.397106  310517 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:17:54.408092  310517 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1018 12:17:54.408143  310517 kubeadm.go:601] duration metric: took 28.647208ms to restartPrimaryControlPlane
	I1018 12:17:54.408155  310517 kubeadm.go:402] duration metric: took 105.709981ms to StartCluster
	I1018 12:17:54.408175  310517 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:54.408260  310517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:54.410019  310517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:54.410279  310517 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:17:54.410342  310517 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:17:54.410450  310517 addons.go:69] Setting storage-provisioner=true in profile "no-preload-406541"
	I1018 12:17:54.410461  310517 addons.go:69] Setting dashboard=true in profile "no-preload-406541"
	I1018 12:17:54.410473  310517 addons.go:238] Setting addon storage-provisioner=true in "no-preload-406541"
	W1018 12:17:54.410482  310517 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:17:54.410486  310517 addons.go:238] Setting addon dashboard=true in "no-preload-406541"
	W1018 12:17:54.410495  310517 addons.go:247] addon dashboard should already be in state true
	I1018 12:17:54.410491  310517 addons.go:69] Setting default-storageclass=true in profile "no-preload-406541"
	I1018 12:17:54.410513  310517 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-406541"
	I1018 12:17:54.410522  310517 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:17:54.410559  310517 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:17:54.410874  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.410511  310517 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:54.411038  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.411137  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.412688  310517 out.go:179] * Verifying Kubernetes components...
	I1018 12:17:54.414332  310517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:54.443523  310517 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:17:54.444965  310517 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:17:54.446231  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:17:54.446264  310517 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:17:54.446237  310517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1018 12:17:50.986964  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	W1018 12:17:53.485593  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	W1018 12:17:55.491134  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	I1018 12:17:54.446322  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:54.447508  310517 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:54.447558  310517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:17:54.447622  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:54.448174  310517 addons.go:238] Setting addon default-storageclass=true in "no-preload-406541"
	W1018 12:17:54.448200  310517 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:17:54.448229  310517 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:17:54.448712  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.482549  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:54.488303  310517 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:54.488381  310517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:17:54.488468  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:54.489309  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:54.516388  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:54.583220  310517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:54.597546  310517 node_ready.go:35] waiting up to 6m0s for node "no-preload-406541" to be "Ready" ...
	I1018 12:17:54.610479  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:17:54.610503  310517 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:17:54.611730  310517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:54.626852  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:17:54.626879  310517 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:17:54.630668  310517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:54.647602  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:17:54.647627  310517 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:17:54.664345  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:17:54.664370  310517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:17:54.684251  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:17:54.684297  310517 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:17:54.701306  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:17:54.701349  310517 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:17:54.722491  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:17:54.722515  310517 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:17:54.739508  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:17:54.739543  310517 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:17:54.756688  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:54.756712  310517 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:17:54.772197  310517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:55.836083  310517 node_ready.go:49] node "no-preload-406541" is "Ready"
	I1018 12:17:55.836122  310517 node_ready.go:38] duration metric: took 1.238531671s for node "no-preload-406541" to be "Ready" ...
	I1018 12:17:55.836137  310517 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:55.836191  310517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:56.359711  310517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.747950379s)
	I1018 12:17:56.359797  310517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.729091238s)
	I1018 12:17:56.359971  310517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.587738824s)
	I1018 12:17:56.360011  310517 api_server.go:72] duration metric: took 1.949706017s to wait for apiserver process to appear ...
	I1018 12:17:56.360037  310517 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:56.360102  310517 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 12:17:56.361552  310517 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-406541 addons enable metrics-server
	
	I1018 12:17:56.364492  310517 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:17:56.364521  310517 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
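The 500 above is the transient window where the rbac/bootstrap-roles and priority-class post-start hooks have not finished; minikube simply keeps polling /healthz until it answers 200, as the old-k8s-version cluster does further down. A minimal polling sketch in Go (the URL, timeout, and InsecureSkipVerify shortcut are illustrative assumptions, standing in for minikube's cluster CA handling):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, sleeping briefly between attempts.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ready after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
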
	I1018 12:17:56.368067  310517 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 12:17:54.331037  309793 addons.go:514] duration metric: took 3.936153543s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 12:17:54.333424  309793 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1018 12:17:54.333454  309793 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1018 12:17:54.825907  309793 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:17:54.830944  309793 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:17:54.832163  309793 api_server.go:141] control plane version: v1.28.0
	I1018 12:17:54.832189  309793 api_server.go:131] duration metric: took 507.099443ms to wait for apiserver health ...
	I1018 12:17:54.832199  309793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:54.835509  309793 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:54.835542  309793 system_pods.go:61] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:54.835553  309793 system_pods.go:61] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:17:54.835563  309793 system_pods.go:61] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:54.835570  309793 system_pods.go:61] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:17:54.835574  309793 system_pods.go:61] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:17:54.835581  309793 system_pods.go:61] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:54.835586  309793 system_pods.go:61] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:17:54.835591  309793 system_pods.go:61] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:54.835598  309793 system_pods.go:74] duration metric: took 3.392852ms to wait for pod list to return data ...
	I1018 12:17:54.835607  309793 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:54.837737  309793 default_sa.go:45] found service account: "default"
	I1018 12:17:54.837754  309793 default_sa.go:55] duration metric: took 2.141424ms for default service account to be created ...
	I1018 12:17:54.837775  309793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:54.841320  309793 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:54.841349  309793 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:54.841357  309793 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:17:54.841362  309793 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:54.841369  309793 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:17:54.841374  309793 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:17:54.841384  309793 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:54.841392  309793 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:17:54.841398  309793 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:54.841405  309793 system_pods.go:126] duration metric: took 3.625267ms to wait for k8s-apps to be running ...
	I1018 12:17:54.841413  309793 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:54.841453  309793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:54.856451  309793 system_svc.go:56] duration metric: took 15.027046ms WaitForService to wait for kubelet
	I1018 12:17:54.856503  309793 kubeadm.go:586] duration metric: took 4.461779541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:54.856526  309793 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:54.859431  309793 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:54.859451  309793 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:54.859464  309793 node_conditions.go:105] duration metric: took 2.933654ms to run NodePressure ...
	I1018 12:17:54.859475  309793 start.go:241] waiting for startup goroutines ...
	I1018 12:17:54.859481  309793 start.go:246] waiting for cluster config update ...
	I1018 12:17:54.859495  309793 start.go:255] writing updated cluster config ...
	I1018 12:17:54.859732  309793 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:54.864583  309793 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:54.870139  309793 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:17:56.877733  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
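	
	The 500s above are the apiserver's /healthz gate at work: every post-start hook reports ok except poststarthook/rbac/bootstrap-roles, which fails until the default RBAC roles are seeded; half a second later the same endpoint returns 200 ("ok") and the wait completes. A minimal sketch of that poll pattern in Go follows, assuming a self-signed apiserver certificate; the URL, retry interval, and timeout are illustrative choices, not minikube's actual api_server.go values.
	
	// healthz_poll.go: illustrative sketch only, not minikube's implementation.
	// Polls an apiserver /healthz endpoint until it returns 200, tolerating the
	// transient 500s emitted while poststarthook/rbac/bootstrap-roles finishes.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The probe only checks readiness, so TLS verification is skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported "ok"
				}
				// A 500 listing "[-]poststarthook/rbac/bootstrap-roles failed" is
				// expected briefly during startup; keep polling.
				fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}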
	
	
	==> CRI-O <==
	Oct 18 12:17:47 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:47.8292191Z" level=info msg="Starting container: 091c8a673a1911370d2f2ad7e74a80a3a420c35563d3798711db1ea98bf691fc" id=2774a724-81f0-4e87-b490-13acbef56007 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:17:47 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:47.831279085Z" level=info msg="Started container" PID=1844 containerID=091c8a673a1911370d2f2ad7e74a80a3a420c35563d3798711db1ea98bf691fc description=kube-system/coredns-66bc5c9577-7qgqj/coredns id=2774a724-81f0-4e87-b490-13acbef56007 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d849e87ff80d2162d767d998e857ebb5692a8f16f36cb795fbb774945e832496
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.960196794Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ae40eca9-066e-47b2-aa91-249efaec53f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.960315883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.966181163Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fd66c8f3d942b35ab0f788e0249f78e3280e87fd7e8dbae206cc3b69e891e104 UID:cefc36cd-351a-479e-b06d-eca09ed979eb NetNS:/var/run/netns/c563573d-74a5-45e9-b0ed-3dd323a3850f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00100e2b0}] Aliases:map[]}"
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.96622494Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.97835168Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fd66c8f3d942b35ab0f788e0249f78e3280e87fd7e8dbae206cc3b69e891e104 UID:cefc36cd-351a-479e-b06d-eca09ed979eb NetNS:/var/run/netns/c563573d-74a5-45e9-b0ed-3dd323a3850f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00100e2b0}] Aliases:map[]}"
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.978523601Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.979545812Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.980841736Z" level=info msg="Ran pod sandbox fd66c8f3d942b35ab0f788e0249f78e3280e87fd7e8dbae206cc3b69e891e104 with infra container: default/busybox/POD" id=ae40eca9-066e-47b2-aa91-249efaec53f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.98220101Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9adf12a5-865f-4c17-bd3c-b67b553399cb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.982355761Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9adf12a5-865f-4c17-bd3c-b67b553399cb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.98240512Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9adf12a5-865f-4c17-bd3c-b67b553399cb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.983365898Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3fa1400-ecfb-4377-91ef-97df56718227 name=/runtime.v1.ImageService/PullImage
	Oct 18 12:17:50 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:50.985093873Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.343936425Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a3fa1400-ecfb-4377-91ef-97df56718227 name=/runtime.v1.ImageService/PullImage
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.344852515Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5453ceba-229d-4607-811d-d7705915c102 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.346442679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a530cef7-0727-41f4-a67e-bd595509abf2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.350268155Z" level=info msg="Creating container: default/busybox/busybox" id=4e07e388-40fd-479c-948b-fbff9331fc02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.351059413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.35516315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.35567232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.387963683Z" level=info msg="Created container e5bdf18b96732495f526bc746a8fb5d2802d5a4b82fbd59988b975b9301d6537: default/busybox/busybox" id=4e07e388-40fd-479c-948b-fbff9331fc02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.388729051Z" level=info msg="Starting container: e5bdf18b96732495f526bc746a8fb5d2802d5a4b82fbd59988b975b9301d6537" id=e108a5f6-551a-4df5-995d-1ca2b2843749 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:17:52 default-k8s-diff-port-028309 crio[780]: time="2025-10-18T12:17:52.390957438Z" level=info msg="Started container" PID=1918 containerID=e5bdf18b96732495f526bc746a8fb5d2802d5a4b82fbd59988b975b9301d6537 description=default/busybox/busybox id=e108a5f6-551a-4df5-995d-1ca2b2843749 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd66c8f3d942b35ab0f788e0249f78e3280e87fd7e8dbae206cc3b69e891e104
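	
	The lines above trace the standard CRI round trip for the busybox pod: the kubelet asks ImageStatus, CRI-O answers "not found", PullImage resolves the tag to its sha256 digest, ImageStatus is re-checked, and CreateContainer/StartContainer follow. The image-side half of that sequence can be reproduced by hand with crictl; the socket path below is CRI-O's default and is an assumption for this node:
	
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images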
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e5bdf18b96732       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   fd66c8f3d942b       busybox                                                default
	091c8a673a191       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   d849e87ff80d2       coredns-66bc5c9577-7qgqj                               kube-system
	95cf31f440193       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   e9f9058da97e5       storage-provisioner                                    kube-system
	c4300ee5b8fd8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   05a874405cb9d       kindnet-hbfgg                                          kube-system
	2d7a0d23d3fdc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   8b1884feff504       kube-proxy-bffkr                                       kube-system
	ab20d9f12c69e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   9a02966eda2c6       kube-apiserver-default-k8s-diff-port-028309            kube-system
	c6c1fee11f452       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   85b04c78b16c2       etcd-default-k8s-diff-port-028309                      kube-system
	b2e843e33d639       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   95fb6a1038088       kube-controller-manager-default-k8s-diff-port-028309   kube-system
	31be00ff8063d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   bec29f8d50bbd       kube-scheduler-default-k8s-diff-port-028309            kube-system
	
	
	==> coredns [091c8a673a1911370d2f2ad7e74a80a3a420c35563d3798711db1ea98bf691fc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58719 - 44501 "HINFO IN 6317830053392203974.4640400121392324663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415807724s
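	
	The HINFO query for a random name from 127.0.0.1 is CoreDNS's own loop-detection probe (the loop plugin queries itself at startup); the NXDOMAIN answer means no forwarding loop was detected. Cluster DNS can be spot-checked the same way from inside a pod, assuming the kube-dns ClusterIP of 10.96.0.10 allocated in the kube-apiserver log below:
	
	dig +short @10.96.0.10 kubernetes.default.svc.cluster.local A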
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-028309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-028309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-028309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_17_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:17:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-028309
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:18:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:18:00 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:18:00 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:18:00 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:18:00 +0000   Sat, 18 Oct 2025 12:17:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-028309
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                ff570318-6181-45ed-80f8-45dccb2d1794
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-7qgqj                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-028309                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-hbfgg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-028309             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-028309    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-bffkr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-028309             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-028309 event: Registered Node default-k8s-diff-port-028309 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-028309 status is now: NodeReady
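	
	As a cross-check, the Allocated resources block is just the column sums of the pod table above: CPU requests 850m = 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler), i.e. 850m of 8 CPUs ≈ 10%; memory requests 220Mi = 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet); the only limits come from kindnet (100m CPU) and from coredns plus kindnet memory (170Mi + 50Mi = 220Mi).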
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [c6c1fee11f452565f1c77a84233b8141567b6b4d6af554e88296d533cb299b06] <==
	{"level":"warn","ts":"2025-10-18T12:17:26.762622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.770167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.780241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.787297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.794942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.803869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.811189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.819562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.828310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.840275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.852700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.862498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.871223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.879612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.887231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.896854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.904607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.914012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.922513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.930694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.940058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.948936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.963796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:26.978895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:27.049907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58128","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:18:01 up  1:00,  0 user,  load average: 4.70, 4.21, 2.58
	Linux default-k8s-diff-port-028309 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4300ee5b8fd8ed51e9f5ad96712819e638f5b3393821085fb83e160ca21e6a4] <==
	I1018 12:17:36.933115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:17:36.933387       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 12:17:36.933528       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:17:36.933544       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:17:36.933553       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:17:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:17:37.137935       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:17:37.137968       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:17:37.137978       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:17:37.138153       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:17:37.438811       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:37.438836       1 metrics.go:72] Registering metrics
	I1018 12:17:37.438893       1 controller.go:711] "Syncing nftables rules"
	I1018 12:17:47.138823       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:17:47.138877       1 main.go:301] handling current node
	I1018 12:17:57.138798       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:17:57.138835       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ab20d9f12c69e10f5696187d3f28873946e303c232ef3a02bddc65ed08e3d6ea] <==
	I1018 12:17:27.654686       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:17:27.655387       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:27.672130       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:17:27.672853       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:17:27.673550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:27.808620       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:17:28.508599       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:17:28.512523       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:17:28.512544       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:17:29.079609       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:17:29.118565       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:17:29.213553       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:17:29.220324       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 12:17:29.221592       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:17:29.226482       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:17:29.899800       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:17:30.402170       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:17:30.412442       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:17:30.419209       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:17:35.557352       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:35.564133       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:35.704625       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:17:35.801957       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 12:17:59.756160       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:37106: use of closed network connection
	
	
	==> kube-controller-manager [b2e843e33d6394f311952df24bbede30de64956434235034cea67c30aa8c4612] <==
	I1018 12:17:34.878903       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:17:34.886801       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:17:34.899022       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:17:34.899138       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:17:34.899145       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:17:34.899271       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:17:34.899408       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:17:34.899427       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:17:34.899439       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:17:34.899462       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:17:34.899646       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:17:34.899702       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:17:34.899917       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:17:34.900075       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:17:34.900159       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:17:34.900282       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 12:17:34.900463       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 12:17:34.900931       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:17:34.903438       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:17:34.903465       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:17:34.904407       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:17:34.905643       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:34.908186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:34.914400       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:17:49.852413       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2d7a0d23d3fdcc413cd8b772f283cab5451bf837dbf35079ef9fd8d4eb5bdb4e] <==
	I1018 12:17:36.814288       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:17:36.878851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:17:36.979190       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:17:36.979233       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 12:17:36.979325       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:17:36.999202       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:17:36.999278       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:17:37.005033       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:17:37.005488       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:17:37.005517       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:37.007413       1 config.go:200] "Starting service config controller"
	I1018 12:17:37.007447       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:17:37.007465       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:17:37.007473       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:17:37.007516       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:17:37.007531       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:17:37.007566       1 config.go:309] "Starting node config controller"
	I1018 12:17:37.007571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:17:37.007582       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:17:37.108563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:17:37.108580       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:17:37.108600       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
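	
	The "configuration may be incomplete" warning above is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort connections on every local IP. In a kubeadm-provisioned cluster the setting lives in the KubeProxyConfiguration held in the kube-system/kube-proxy ConfigMap; a minimal sketch of the suggested fix follows (field per kubeproxy.config.k8s.io/v1alpha1; whether this minikube profile should actually be changed is left open):
	
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# "primary" limits NodePorts to the node's primary IP(s) instead of all local IPs.
	nodePortAddresses: ["primary"]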
	
	
	==> kube-scheduler [31be00ff8063d9e41075e43a0237e82a72c022f3c75368ce99d2d17285ffd607] <==
	I1018 12:17:28.078249       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:28.081269       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:28.081324       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:28.081730       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:17:28.082003       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:17:28.083241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:17:28.083326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:17:28.085265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:17:28.085300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:17:28.085381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:17:28.085379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:17:28.085419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:17:28.085440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:17:28.085567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:17:28.085839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:17:28.085849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:17:28.085863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:17:28.086092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:17:28.086420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:17:28.086583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:17:28.086731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:17:28.086787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:17:28.086851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:17:28.086844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1018 12:17:29.381819       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862347    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d988f171-de9d-485c-b4db-67222e30fc25-xtables-lock\") pod \"kube-proxy-bffkr\" (UID: \"d988f171-de9d-485c-b4db-67222e30fc25\") " pod="kube-system/kube-proxy-bffkr"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862397    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/672043e3-34ce-4800-8142-07ba221b21bc-cni-cfg\") pod \"kindnet-hbfgg\" (UID: \"672043e3-34ce-4800-8142-07ba221b21bc\") " pod="kube-system/kindnet-hbfgg"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862414    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/672043e3-34ce-4800-8142-07ba221b21bc-xtables-lock\") pod \"kindnet-hbfgg\" (UID: \"672043e3-34ce-4800-8142-07ba221b21bc\") " pod="kube-system/kindnet-hbfgg"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862451    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d988f171-de9d-485c-b4db-67222e30fc25-lib-modules\") pod \"kube-proxy-bffkr\" (UID: \"d988f171-de9d-485c-b4db-67222e30fc25\") " pod="kube-system/kube-proxy-bffkr"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862476    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tbn\" (UniqueName: \"kubernetes.io/projected/d988f171-de9d-485c-b4db-67222e30fc25-kube-api-access-c2tbn\") pod \"kube-proxy-bffkr\" (UID: \"d988f171-de9d-485c-b4db-67222e30fc25\") " pod="kube-system/kube-proxy-bffkr"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862544    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sp8x\" (UniqueName: \"kubernetes.io/projected/672043e3-34ce-4800-8142-07ba221b21bc-kube-api-access-7sp8x\") pod \"kindnet-hbfgg\" (UID: \"672043e3-34ce-4800-8142-07ba221b21bc\") " pod="kube-system/kindnet-hbfgg"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862629    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d988f171-de9d-485c-b4db-67222e30fc25-kube-proxy\") pod \"kube-proxy-bffkr\" (UID: \"d988f171-de9d-485c-b4db-67222e30fc25\") " pod="kube-system/kube-proxy-bffkr"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:35.862665    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/672043e3-34ce-4800-8142-07ba221b21bc-lib-modules\") pod \"kindnet-hbfgg\" (UID: \"672043e3-34ce-4800-8142-07ba221b21bc\") " pod="kube-system/kindnet-hbfgg"
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: E1018 12:17:35.971103    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: E1018 12:17:35.971144    1318 projected.go:196] Error preparing data for projected volume kube-api-access-7sp8x for pod kube-system/kindnet-hbfgg: configmap "kube-root-ca.crt" not found
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: E1018 12:17:35.971154    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: E1018 12:17:35.971172    1318 projected.go:196] Error preparing data for projected volume kube-api-access-c2tbn for pod kube-system/kube-proxy-bffkr: configmap "kube-root-ca.crt" not found
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: E1018 12:17:35.971239    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/672043e3-34ce-4800-8142-07ba221b21bc-kube-api-access-7sp8x podName:672043e3-34ce-4800-8142-07ba221b21bc nodeName:}" failed. No retries permitted until 2025-10-18 12:17:36.471209302 +0000 UTC m=+6.316119647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7sp8x" (UniqueName: "kubernetes.io/projected/672043e3-34ce-4800-8142-07ba221b21bc-kube-api-access-7sp8x") pod "kindnet-hbfgg" (UID: "672043e3-34ce-4800-8142-07ba221b21bc") : configmap "kube-root-ca.crt" not found
	Oct 18 12:17:35 default-k8s-diff-port-028309 kubelet[1318]: E1018 12:17:35.971258    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d988f171-de9d-485c-b4db-67222e30fc25-kube-api-access-c2tbn podName:d988f171-de9d-485c-b4db-67222e30fc25 nodeName:}" failed. No retries permitted until 2025-10-18 12:17:36.471249673 +0000 UTC m=+6.316160013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c2tbn" (UniqueName: "kubernetes.io/projected/d988f171-de9d-485c-b4db-67222e30fc25-kube-api-access-c2tbn") pod "kube-proxy-bffkr" (UID: "d988f171-de9d-485c-b4db-67222e30fc25") : configmap "kube-root-ca.crt" not found
	Oct 18 12:17:37 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:37.290928    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hbfgg" podStartSLOduration=2.290908004 podStartE2EDuration="2.290908004s" podCreationTimestamp="2025-10-18 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:37.290710332 +0000 UTC m=+7.135620678" watchObservedRunningTime="2025-10-18 12:17:37.290908004 +0000 UTC m=+7.135818352"
	Oct 18 12:17:37 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:37.303258    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bffkr" podStartSLOduration=2.303235195 podStartE2EDuration="2.303235195s" podCreationTimestamp="2025-10-18 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:37.303035105 +0000 UTC m=+7.147945451" watchObservedRunningTime="2025-10-18 12:17:37.303235195 +0000 UTC m=+7.148145542"
	Oct 18 12:17:47 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:47.438323    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 12:17:47 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:47.555674    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8a70ca43-431c-461f-bac2-f916aa44de50-tmp\") pod \"storage-provisioner\" (UID: \"8a70ca43-431c-461f-bac2-f916aa44de50\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:47 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:47.555718    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgjd\" (UniqueName: \"kubernetes.io/projected/8a70ca43-431c-461f-bac2-f916aa44de50-kube-api-access-smgjd\") pod \"storage-provisioner\" (UID: \"8a70ca43-431c-461f-bac2-f916aa44de50\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:47 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:47.555736    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee994967-1cb7-4583-ba0d-debf8ccc08e1-config-volume\") pod \"coredns-66bc5c9577-7qgqj\" (UID: \"ee994967-1cb7-4583-ba0d-debf8ccc08e1\") " pod="kube-system/coredns-66bc5c9577-7qgqj"
	Oct 18 12:17:47 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:47.555750    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxk6\" (UniqueName: \"kubernetes.io/projected/ee994967-1cb7-4583-ba0d-debf8ccc08e1-kube-api-access-mzxk6\") pod \"coredns-66bc5c9577-7qgqj\" (UID: \"ee994967-1cb7-4583-ba0d-debf8ccc08e1\") " pod="kube-system/coredns-66bc5c9577-7qgqj"
	Oct 18 12:17:48 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:48.318876    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7qgqj" podStartSLOduration=12.318850887 podStartE2EDuration="12.318850887s" podCreationTimestamp="2025-10-18 12:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:48.318611984 +0000 UTC m=+18.163522338" watchObservedRunningTime="2025-10-18 12:17:48.318850887 +0000 UTC m=+18.163761234"
	Oct 18 12:17:48 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:48.330159    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.330135004 podStartE2EDuration="13.330135004s" podCreationTimestamp="2025-10-18 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:48.329489375 +0000 UTC m=+18.174399722" watchObservedRunningTime="2025-10-18 12:17:48.330135004 +0000 UTC m=+18.175045351"
	Oct 18 12:17:50 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:50.773459    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2dc\" (UniqueName: \"kubernetes.io/projected/cefc36cd-351a-479e-b06d-eca09ed979eb-kube-api-access-mn2dc\") pod \"busybox\" (UID: \"cefc36cd-351a-479e-b06d-eca09ed979eb\") " pod="default/busybox"
	Oct 18 12:17:53 default-k8s-diff-port-028309 kubelet[1318]: I1018 12:17:53.336683    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.973606557 podStartE2EDuration="3.336658436s" podCreationTimestamp="2025-10-18 12:17:50 +0000 UTC" firstStartedPulling="2025-10-18 12:17:50.982823513 +0000 UTC m=+20.827733844" lastFinishedPulling="2025-10-18 12:17:52.345875382 +0000 UTC m=+22.190785723" observedRunningTime="2025-10-18 12:17:53.336292183 +0000 UTC m=+23.181202531" watchObservedRunningTime="2025-10-18 12:17:53.336658436 +0000 UTC m=+23.181568785"
	
	
	==> storage-provisioner [95cf31f44019314f14c32c500018201849538bff5f56bab6a4268bc82cf269eb] <==
	I1018 12:17:47.833059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:17:47.842032       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:17:47.842089       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:17:47.844350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:47.849425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:17:47.849671       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:17:47.849752       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5d62124-6ee2-44d3-a6fa-ae6c6c57818d", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-028309_848d230d-03b4-4e3d-8d4a-75552365895b became leader
	I1018 12:17:47.849829       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-028309_848d230d-03b4-4e3d-8d4a-75552365895b!
	W1018 12:17:47.853096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:47.858482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:17:47.950836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-028309_848d230d-03b4-4e3d-8d4a-75552365895b!
	W1018 12:17:49.862259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:49.866906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:51.870464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:51.874458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:53.878329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:53.887068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:55.892126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:55.896716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:57.900658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:57.905273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:59.909100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:59.913407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
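Note on the kubelet errors near the top of this log: the kube-root-ca.crt failures during the first seconds after boot are a benign startup race, not a defect. kube-controller-manager publishes that ConfigMap into every namespace shortly after the control plane comes up; until it lands, every projected service-account volume mount fails and is retried on the 500ms durationBeforeRetry visible in the nestedpendingoperations lines, and both pods started within two seconds anyway. A minimal client-go sketch of the same wait, purely illustrative and not code from this repository:

// Illustrative only: poll until the ConfigMap that projected volumes need
// exists, spacing retries the way the kubelet log above does (500ms).
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForRootCA(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	for {
		// kube-controller-manager creates kube-root-ca.crt in every namespace.
		_, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, "kube-root-ca.crt", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // matches durationBeforeRetry above
		}
	}
}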
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-028309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.52s)
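The storage-provisioner log in the post-mortem also explains the steady stream of warnings.go:70 lines: the provisioner still takes and renews its leader lease on a v1 Endpoints object (k8s.io-minikube-hostpath), and the API server flags that as deprecated since v1.33 on every renewal. A hedged sketch of the modern equivalent using a coordination.k8s.io Lease lock; the durations are common defaults, not the provisioner's actual settings:

// Illustrative only: the same election with a Lease lock instead of the
// deprecated Endpoints object, which would silence the warnings above.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func runElected(ctx context.Context, cs *kubernetes.Clientset, id string, lead func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: lead,
			OnStoppedLeading: func() {}, // lost the lease; stop provisioning
		},
	})
}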

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.72s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (289.476647ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
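What the MK_ADDON_ENABLE_PAUSED exit means: before enabling an addon, minikube checks whether the cluster is paused by shelling out to sudo runc list -f json inside the node container. Here runc's state directory /run/runc did not exist yet, so the command exited non-zero and the enable was reported as failed even though nothing was paused. A hedged sketch of a more tolerant form of that check; this is not minikube's actual implementation:

// Illustrative only: treat a missing runc state directory as "no containers
// yet" rather than a hard failure of the paused check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // state dir absent: nothing is running, so nothing is paused
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil { // runc prints "null" when empty
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}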
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-175371 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-175371 describe deploy/metrics-server -n kube-system: exit status 1 (75.940568ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-175371 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-175371
helpers_test.go:243: (dbg) docker inspect embed-certs-175371:

-- stdout --
	[
	    {
	        "Id": "62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1",
	        "Created": "2025-10-18T12:16:56.477755693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297298,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:56.539000878Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/hostname",
	        "HostsPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/hosts",
	        "LogPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1-json.log",
	        "Name": "/embed-certs-175371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-175371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-175371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1",
	                "LowerDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-175371",
	                "Source": "/var/lib/docker/volumes/embed-certs-175371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-175371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-175371",
	                "name.minikube.sigs.k8s.io": "embed-certs-175371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "030f3be7734c465dee1e7451095edd22c6728c9e20af2ee2e88cd565f8030f87",
	            "SandboxKey": "/var/run/docker/netns/030f3be7734c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-175371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:06:30:d8:0c:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb34d5222966a405cf9b383e8910070a73637f333cd8b420bf2f4d8d0d6f8e0",
	                    "EndpointID": "8b80c757a97bec29a641fb0894a1ef8d168f1832e750a46e1417b6d4a19c6f09",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-175371",
	                        "62e5625dfcf2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
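The NetworkSettings.Ports map in the inspect output above is what the cli_runner template queries later in this log (e.g. (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort) to find the host-side SSH port. An illustrative Go sketch that reads the same field by parsing the JSON instead; the struct covers only the fields this example needs:

// Illustrative only; field names follow the inspect JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type containerInfo struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func sshHostPort(name string) (string, error) {
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		return "", err
	}
	var infos []containerInfo // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &infos); err != nil {
		return "", err
	}
	if len(infos) == 0 {
		return "", fmt.Errorf("container %q not found", name)
	}
	bindings := infos[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no 22/tcp binding on %q", name)
	}
	return bindings[0].HostPort, nil // e.g. "33098" in the output above
}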
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-175371 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-175371 logs -n 25: (1.493603179s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-376567 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ ssh     │ -p bridge-376567 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo containerd config dump                                                                                                                                                                                                  │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                                                                                               │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p old-k8s-version-024443 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:17:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:17:45.818265  310517 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:17:45.818534  310517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:45.818545  310517 out.go:374] Setting ErrFile to fd 2...
	I1018 12:17:45.818549  310517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:45.818813  310517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:17:45.819346  310517 out.go:368] Setting JSON to false
	I1018 12:17:45.820567  310517 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3614,"bootTime":1760786252,"procs":386,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:17:45.820686  310517 start.go:141] virtualization: kvm guest
	I1018 12:17:45.822791  310517 out.go:179] * [no-preload-406541] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:17:45.824116  310517 notify.go:220] Checking for updates...
	I1018 12:17:45.824155  310517 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:17:45.825571  310517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:17:45.826898  310517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:45.828390  310517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:17:45.829891  310517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:17:45.831226  310517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:17:45.832937  310517 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:45.833485  310517 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:17:45.858009  310517 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:17:45.858151  310517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:45.918498  310517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:45.906848188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:45.918661  310517 docker.go:318] overlay module found
	I1018 12:17:45.920998  310517 out.go:179] * Using the docker driver based on existing profile
	I1018 12:17:45.922451  310517 start.go:305] selected driver: docker
	I1018 12:17:45.922486  310517 start.go:925] validating driver "docker" against &{Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:45.922591  310517 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:17:45.923204  310517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:45.980172  310517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:17:45.968945214 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:17:45.980486  310517 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:45.980513  310517 cni.go:84] Creating CNI manager for ""
	I1018 12:17:45.980554  310517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:45.980590  310517 start.go:349] cluster config:
	{Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:45.982504  310517 out.go:179] * Starting "no-preload-406541" primary control-plane node in "no-preload-406541" cluster
	I1018 12:17:45.984470  310517 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:17:45.985833  310517 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:17:45.986928  310517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:45.986988  310517 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:17:45.987099  310517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/config.json ...
	I1018 12:17:45.987161  310517 cache.go:107] acquiring lock: {Name:mk2851c90c3cee4b8dc905a54300119306c34425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987186  310517 cache.go:107] acquiring lock: {Name:mk7beac465d3e33866f36c7d2d6c2d5c7648cadc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987187  310517 cache.go:107] acquiring lock: {Name:mk12378f271fac5391329588d22fd9f6b5f2efe9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987245  310517 cache.go:107] acquiring lock: {Name:mkf899cc61754339eb7c16b16d780a0d64247c63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987276  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 12:17:45.987288  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 12:17:45.987289  310517 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 121.364µs
	I1018 12:17:45.987274  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 12:17:45.987298  310517 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 54.761µs
	I1018 12:17:45.987306  310517 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 12:17:45.987308  310517 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 12:17:45.987307  310517 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 135.39µs
	I1018 12:17:45.987274  310517 cache.go:107] acquiring lock: {Name:mkc51ddd9714d0bce2fec89ca6505008f746ff3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987322  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 12:17:45.987324  310517 cache.go:107] acquiring lock: {Name:mk96d90bcd247dcb2d931dae4c9362f05288238f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987329  310517 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.853µs
	I1018 12:17:45.987345  310517 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 12:17:45.987322  310517 cache.go:107] acquiring lock: {Name:mk574d4568922c0dc77dc7227f9dde52e8f9b559 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987360  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 12:17:45.987368  310517 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 45.589µs
	I1018 12:17:45.987375  310517 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 12:17:45.987319  310517 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 12:17:45.987373  310517 cache.go:107] acquiring lock: {Name:mkd955903c0f718f7272b2c35c91d555532a9b1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:45.987420  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 12:17:45.987439  310517 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.587µs
	I1018 12:17:45.987455  310517 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 12:17:45.987446  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 12:17:45.987476  310517 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 194.761µs
	I1018 12:17:45.987480  310517 cache.go:115] /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 12:17:45.987488  310517 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 12:17:45.987495  310517 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 175.237µs
	I1018 12:17:45.987511  310517 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21647-5865/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 12:17:45.987520  310517 cache.go:87] Successfully saved all images to host disk.
	I1018 12:17:46.008377  310517 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:17:46.008400  310517 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:17:46.008414  310517 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:17:46.008441  310517 start.go:360] acquireMachinesLock for no-preload-406541: {Name:mk0766028e9fb536dc77f73d30a9c9fc1a771d70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:46.008506  310517 start.go:364] duration metric: took 46.934µs to acquireMachinesLock for "no-preload-406541"
	I1018 12:17:46.008529  310517 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:17:46.008539  310517 fix.go:54] fixHost starting: 
	I1018 12:17:46.008842  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:46.028023  310517 fix.go:112] recreateIfNeeded on no-preload-406541: state=Stopped err=<nil>
	W1018 12:17:46.028064  310517 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:17:43.005465  309793 out.go:252] * Restarting existing docker container for "old-k8s-version-024443" ...
	I1018 12:17:43.005538  309793 cli_runner.go:164] Run: docker start old-k8s-version-024443
	I1018 12:17:43.262721  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:43.281797  309793 kic.go:430] container "old-k8s-version-024443" state is running.
	I1018 12:17:43.282231  309793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-024443
	I1018 12:17:43.301262  309793 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/config.json ...
	I1018 12:17:43.301521  309793 machine.go:93] provisionDockerMachine start ...
	I1018 12:17:43.301602  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:43.321409  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:43.321666  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:43.321682  309793 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:17:43.322298  309793 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56892->127.0.0.1:33108: read: connection reset by peer
	I1018 12:17:46.463800  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-024443
	
	I1018 12:17:46.463827  309793 ubuntu.go:182] provisioning hostname "old-k8s-version-024443"
	I1018 12:17:46.463875  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:46.483267  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:46.483549  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:46.483573  309793 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-024443 && echo "old-k8s-version-024443" | sudo tee /etc/hostname
	I1018 12:17:46.634868  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-024443
	
	I1018 12:17:46.634965  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:46.655200  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:46.655507  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:46.655535  309793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-024443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-024443/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-024443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:17:46.788444  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
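The hostname script above is idempotent: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, so re-provisioning never duplicates the line. Trimmed to its core (hostname hard-coded as in this run):

	if grep -q '^127\.0\.1\.1' /etc/hosts; then
	  # rewrite the existing mapping in place
	  sudo sed -i 's/^127\.0\.1\.1.*/127.0.1.1 old-k8s-version-024443/' /etc/hosts
	else
	  # or append one if the host never had it
	  echo '127.0.1.1 old-k8s-version-024443' | sudo tee -a /etc/hosts
	fi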
	I1018 12:17:46.788471  309793 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:17:46.788521  309793 ubuntu.go:190] setting up certificates
	I1018 12:17:46.788535  309793 provision.go:84] configureAuth start
	I1018 12:17:46.788589  309793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-024443
	I1018 12:17:46.806062  309793 provision.go:143] copyHostCerts
	I1018 12:17:46.806115  309793 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:17:46.806125  309793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:17:46.806195  309793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:17:46.806317  309793 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:17:46.806330  309793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:17:46.806357  309793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:17:46.806433  309793 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:17:46.806440  309793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:17:46.806463  309793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:17:46.806523  309793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-024443 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-024443]
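The server cert generated above carries every name in the san=[...] list, so SSH-forwarded (127.0.0.1) and direct (192.168.85.2) connections both verify. A self-signed approximation with OpenSSL 1.1.1+ (minikube actually signs against its CA; the output file names here are illustrative):

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.old-k8s-version-024443" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-024443"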
	I1018 12:17:47.384178  309793 provision.go:177] copyRemoteCerts
	I1018 12:17:47.384234  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:17:47.384267  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:47.402639  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:47.501579  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:17:47.519836  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 12:17:47.537654  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:17:47.555436  309793 provision.go:87] duration metric: took 766.883501ms to configureAuth
	I1018 12:17:47.555469  309793 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:17:47.555679  309793 config.go:182] Loaded profile config "old-k8s-version-024443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:17:47.555808  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:47.576349  309793 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:47.576603  309793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1018 12:17:47.576621  309793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:17:47.887626  309793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:17:47.887664  309793 machine.go:96] duration metric: took 4.586119524s to provisionDockerMachine
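The /etc/sysconfig/crio.minikube file written above only takes effect if the CRI-O unit in the kicbase image sources it (an assumption of this note, not something the log states). To see how the unit consumes it on the node:

	systemctl cat crio                   # look for an EnvironmentFile= line naming crio.minikube
	systemctl show -p Environment crio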
	I1018 12:17:47.887677  309793 start.go:293] postStartSetup for "old-k8s-version-024443" (driver="docker")
	I1018 12:17:47.887689  309793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:17:47.887791  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:17:47.887843  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:47.906882  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.005047  309793 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:17:48.008814  309793 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:17:48.008839  309793 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:17:48.008852  309793 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:17:48.008904  309793 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:17:48.009008  309793 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:17:48.009131  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:17:48.017240  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:48.035887  309793 start.go:296] duration metric: took 148.197454ms for postStartSetup
	I1018 12:17:48.035967  309793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:17:48.036009  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:48.054834  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.149141  309793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:17:48.154007  309793 fix.go:56] duration metric: took 5.168121201s for fixHost
	I1018 12:17:48.154038  309793 start.go:83] releasing machines lock for "old-k8s-version-024443", held for 5.168177217s
	I1018 12:17:48.154126  309793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-024443
	I1018 12:17:48.173319  309793 ssh_runner.go:195] Run: cat /version.json
	I1018 12:17:48.173373  309793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:17:48.173422  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:48.173423  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:48.192911  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.193887  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:48.354736  309793 ssh_runner.go:195] Run: systemctl --version
	I1018 12:17:48.362750  309793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:17:48.401550  309793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:17:48.406989  309793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:17:48.407062  309793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:17:48.415599  309793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
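The find invocation at 12:17:48.407062 loses its quoting when flattened into the log; restored, it moves any bridge/podman CNI configs aside so the recommended kindnet config (see 12:17:49.398428) wins:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;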
	I1018 12:17:48.415624  309793 start.go:495] detecting cgroup driver to use...
	I1018 12:17:48.415659  309793 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:17:48.415701  309793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:17:48.431310  309793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:17:48.444921  309793 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:17:48.444986  309793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:17:48.460916  309793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:17:48.474427  309793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:17:48.559191  309793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:17:48.644895  309793 docker.go:234] disabling docker service ...
	I1018 12:17:48.644960  309793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:17:48.659881  309793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:17:48.674682  309793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:17:48.762387  309793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:17:48.842445  309793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:17:48.855257  309793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:17:48.870442  309793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 12:17:48.870509  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.879856  309793 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:17:48.879925  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.889083  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.898192  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.907723  309793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:17:48.916533  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.926511  309793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:48.935628  309793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
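Taken together, the sed edits between 12:17:48.870 and 12:17:48.935 leave /etc/crio/crio.conf.d/02-crio.conf with roughly these values (a reconstruction from the commands, not a dump of the actual file):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]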
	I1018 12:17:48.945196  309793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:17:48.953082  309793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:17:48.961367  309793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:49.045719  309793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:17:49.159358  309793 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:17:49.159419  309793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:17:49.163614  309793 start.go:563] Will wait 60s for crictl version
	I1018 12:17:49.163679  309793 ssh_runner.go:195] Run: which crictl
	I1018 12:17:49.167344  309793 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:17:49.192247  309793 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
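The two 60s waits above (first for the socket path, then for crictl to answer) compress into a two-liner; the endpoint comes from the crictl.yaml written at 12:17:48.855:

	timeout 60 sh -c 'until test -S /var/run/crio/crio.sock; do sleep 1; done'
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version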
	I1018 12:17:49.192325  309793 ssh_runner.go:195] Run: crio --version
	I1018 12:17:49.221474  309793 ssh_runner.go:195] Run: crio --version
	I1018 12:17:49.251652  309793 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	W1018 12:17:46.692698  303392 node_ready.go:57] node "default-k8s-diff-port-028309" has "Ready":"False" status (will retry)
	I1018 12:17:47.692824  303392 node_ready.go:49] node "default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:47.692857  303392 node_ready.go:38] duration metric: took 12.003720394s for node "default-k8s-diff-port-028309" to be "Ready" ...
	I1018 12:17:47.692874  303392 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:47.692929  303392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:47.705357  303392 api_server.go:72] duration metric: took 12.286538652s to wait for apiserver process to appear ...
	I1018 12:17:47.705379  303392 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:47.705395  303392 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:17:47.710222  303392 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 12:17:47.711107  303392 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:47.711130  303392 api_server.go:131] duration metric: took 5.745655ms to wait for apiserver health ...
	I1018 12:17:47.711140  303392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:47.714331  303392 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:47.714361  303392 system_pods.go:61] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:47.714368  303392 system_pods.go:61] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:47.714373  303392 system_pods.go:61] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:47.714377  303392 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:47.714380  303392 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:47.714384  303392 system_pods.go:61] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:47.714387  303392 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:47.714392  303392 system_pods.go:61] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:47.714401  303392 system_pods.go:74] duration metric: took 3.25643ms to wait for pod list to return data ...
	I1018 12:17:47.714409  303392 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:47.716820  303392 default_sa.go:45] found service account: "default"
	I1018 12:17:47.716836  303392 default_sa.go:55] duration metric: took 2.423051ms for default service account to be created ...
	I1018 12:17:47.716844  303392 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:47.719390  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:47.719418  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:47.719427  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:47.719436  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:47.719442  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:47.719450  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:47.719463  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:47.719469  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:47.719481  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:47.719504  303392 retry.go:31] will retry after 235.205246ms: missing components: kube-dns
	I1018 12:17:47.958395  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:47.958430  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:47.958438  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:47.958445  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:47.958450  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:47.958455  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:47.958460  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:47.958466  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:47.958473  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:47.958493  303392 retry.go:31] will retry after 235.162839ms: missing components: kube-dns
	I1018 12:17:48.197604  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:48.197647  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:48.197657  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:48.197665  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:48.197671  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:48.197676  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:48.197689  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:48.197696  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:48.197707  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:48.197730  303392 retry.go:31] will retry after 462.764ms: missing components: kube-dns
	I1018 12:17:48.665815  303392 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:48.665847  303392 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running
	I1018 12:17:48.665855  303392 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running
	I1018 12:17:48.665861  303392 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:17:48.665866  303392 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running
	I1018 12:17:48.665871  303392 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running
	I1018 12:17:48.665876  303392 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:17:48.665882  303392 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running
	I1018 12:17:48.665887  303392 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:17:48.665898  303392 system_pods.go:126] duration metric: took 949.048167ms to wait for k8s-apps to be running ...
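The retry loop above simply re-lists kube-system pods with growing delays until kube-dns leaves Pending. An equivalent hand-rolled poll:

	# keep re-listing until no kube-system pod reports Pending
	until ! kubectl -n kube-system get pods --no-headers | grep -wq Pending; do
	  sleep 1
	done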
	I1018 12:17:48.665912  303392 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:48.665972  303392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:48.679470  303392 system_svc.go:56] duration metric: took 13.550292ms WaitForService to wait for kubelet
	I1018 12:17:48.679503  303392 kubeadm.go:586] duration metric: took 13.26068638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:48.679523  303392 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:48.682666  303392 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:48.682691  303392 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:48.682704  303392 node_conditions.go:105] duration metric: took 3.176473ms to run NodePressure ...
	I1018 12:17:48.682715  303392 start.go:241] waiting for startup goroutines ...
	I1018 12:17:48.682723  303392 start.go:246] waiting for cluster config update ...
	I1018 12:17:48.682735  303392 start.go:255] writing updated cluster config ...
	I1018 12:17:48.683022  303392 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:48.686875  303392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:48.690618  303392 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.695149  303392 pod_ready.go:94] pod "coredns-66bc5c9577-7qgqj" is "Ready"
	I1018 12:17:48.695177  303392 pod_ready.go:86] duration metric: took 4.535928ms for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.697658  303392 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.702367  303392 pod_ready.go:94] pod "etcd-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:48.702388  303392 pod_ready.go:86] duration metric: took 4.706068ms for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.704683  303392 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.713736  303392 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:48.713782  303392 pod_ready.go:86] duration metric: took 9.071932ms for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:48.716521  303392 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.091627  303392 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:49.091653  303392 pod_ready.go:86] duration metric: took 375.10527ms for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.291903  303392 pod_ready.go:83] waiting for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.691733  303392 pod_ready.go:94] pod "kube-proxy-bffkr" is "Ready"
	I1018 12:17:49.691780  303392 pod_ready.go:86] duration metric: took 399.85273ms for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:49.892297  303392 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:50.291380  303392 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-028309" is "Ready"
	I1018 12:17:50.291413  303392 pod_ready.go:86] duration metric: took 399.08983ms for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:50.291429  303392 pod_ready.go:40] duration metric: took 1.604526893s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
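This "extra waiting" phase has a direct kubectl counterpart; the label selectors come from the list logged at 12:17:48.686875, e.g. for the two components that gate most startups:

	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
	kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=240s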
	I1018 12:17:50.348944  303392 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:17:50.353333  303392 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-028309" cluster and "default" namespace by default
	I1018 12:17:49.253107  309793 cli_runner.go:164] Run: docker network inspect old-k8s-version-024443 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:49.270942  309793 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:17:49.275182  309793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:49.286027  309793 kubeadm.go:883] updating cluster {Name:old-k8s-version-024443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-024443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:17:49.286182  309793 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 12:17:49.286226  309793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:49.319603  309793 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:49.319623  309793 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:17:49.319666  309793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:49.345865  309793 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:49.345892  309793 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:17:49.345902  309793 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1018 12:17:49.345988  309793 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-024443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-024443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
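Once the unit above and its 10-kubeadm.conf drop-in are in place (they are scp'd a few lines below), the effective kubelet command line can be inspected on the node with:

	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet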
	I1018 12:17:49.346052  309793 ssh_runner.go:195] Run: crio config
	I1018 12:17:49.398407  309793 cni.go:84] Creating CNI manager for ""
	I1018 12:17:49.398428  309793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:49.398444  309793 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:17:49.398467  309793 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-024443 NodeName:old-k8s-version-024443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:17:49.398596  309793 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-024443"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:17:49.398652  309793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 12:17:49.407843  309793 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:17:49.407920  309793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:17:49.416414  309793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 12:17:49.430154  309793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:17:49.443468  309793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
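Before kubeadm consumes the generated file, it can be checked offline. `kubeadm config validate` exists from v1.26 on, so it should be present in the v1.28.0 binaries used here (an assumption; this run does not execute it):

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new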
	I1018 12:17:49.456536  309793 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:17:49.460426  309793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:49.470456  309793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:49.552794  309793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:49.573678  309793 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443 for IP: 192.168.85.2
	I1018 12:17:49.573704  309793 certs.go:195] generating shared ca certs ...
	I1018 12:17:49.573726  309793 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:49.574000  309793 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:17:49.574063  309793 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:17:49.574077  309793 certs.go:257] generating profile certs ...
	I1018 12:17:49.574205  309793 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/client.key
	I1018 12:17:49.574303  309793 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/apiserver.key.40a89ae9
	I1018 12:17:49.574348  309793 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/proxy-client.key
	I1018 12:17:49.574449  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:17:49.574476  309793 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:17:49.574485  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:17:49.574506  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:17:49.574528  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:17:49.574547  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:17:49.574584  309793 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:49.575220  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:17:49.595131  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:17:49.615276  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:17:49.636377  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:17:49.660922  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:17:49.685225  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:17:49.705144  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:17:49.725305  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/old-k8s-version-024443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:17:49.745531  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:17:49.766346  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:17:49.787134  309793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:17:49.806241  309793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:17:49.819877  309793 ssh_runner.go:195] Run: openssl version
	I1018 12:17:49.827197  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:17:49.837292  309793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:49.841647  309793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:49.841706  309793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:49.882591  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:17:49.891421  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:17:49.900888  309793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:17:49.905260  309793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:17:49.905326  309793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:17:49.943114  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:17:49.952744  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:17:49.962938  309793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:17:49.966930  309793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:17:49.966991  309793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:17:50.003652  309793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:17:50.012856  309793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:17:50.017068  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:17:50.054430  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:17:50.097562  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:17:50.143080  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:17:50.189734  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:17:50.248940  309793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
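Two OpenSSL idioms recur above: -hash derives the eight-hex-digit basename OpenSSL expects for CA symlinks in /etc/ssl/certs (hence the b5213941.0 link), and -checkend 86400 exits non-zero when a cert expires within the next 24 hours:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for 24h+" || echo "renewal needed"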
	I1018 12:17:50.301380  309793 kubeadm.go:400] StartCluster: {Name:old-k8s-version-024443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-024443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:50.301490  309793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:17:50.301551  309793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:17:50.340573  309793 cri.go:89] found id: "c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79"
	I1018 12:17:50.340602  309793 cri.go:89] found id: "b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3"
	I1018 12:17:50.340608  309793 cri.go:89] found id: "c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f"
	I1018 12:17:50.340613  309793 cri.go:89] found id: "cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923"
	I1018 12:17:50.340617  309793 cri.go:89] found id: ""
	I1018 12:17:50.340989  309793 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:17:50.357230  309793 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:17:50Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:17:50.357305  309793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:17:50.367509  309793 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:17:50.367534  309793 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:17:50.367615  309793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:17:50.378221  309793 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:17:50.379393  309793 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-024443" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:50.380074  309793 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-024443" cluster setting kubeconfig missing "old-k8s-version-024443" context setting]
	I1018 12:17:50.380999  309793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
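The kubeconfig repair above can be reproduced by hand with kubectl; the server address and CA path are the ones used in this run (a sketch of the equivalent edits, not what minikube executes):

	kubectl config set-cluster old-k8s-version-024443 \
	  --server=https://192.168.85.2:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt
	kubectl config set-context old-k8s-version-024443 \
	  --cluster=old-k8s-version-024443 --user=old-k8s-version-024443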
	I1018 12:17:50.382855  309793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:17:50.392271  309793 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 12:17:50.392309  309793 kubeadm.go:601] duration metric: took 24.768829ms to restartPrimaryControlPlane
	I1018 12:17:50.392321  309793 kubeadm.go:402] duration metric: took 90.950451ms to StartCluster
	I1018 12:17:50.392339  309793 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:50.392392  309793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:50.394423  309793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:50.394689  309793 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:17:50.394877  309793 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:17:50.394965  309793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-024443"
	I1018 12:17:50.394965  309793 config.go:182] Loaded profile config "old-k8s-version-024443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:17:50.394990  309793 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-024443"
	W1018 12:17:50.394999  309793 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:17:50.395011  309793 addons.go:69] Setting dashboard=true in profile "old-k8s-version-024443"
	I1018 12:17:50.395024  309793 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:17:50.395025  309793 addons.go:238] Setting addon dashboard=true in "old-k8s-version-024443"
	W1018 12:17:50.395035  309793 addons.go:247] addon dashboard should already be in state true
	I1018 12:17:50.395059  309793 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:17:50.395077  309793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-024443"
	I1018 12:17:50.395096  309793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-024443"
	I1018 12:17:50.395386  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:50.395576  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:50.395883  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:50.400893  309793 out.go:179] * Verifying Kubernetes components...
	I1018 12:17:50.402806  309793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:50.432834  309793 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:17:50.433968  309793 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-024443"
	W1018 12:17:50.434047  309793 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:17:50.434111  309793 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:17:50.434428  309793 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:50.434457  309793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:17:50.434519  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:50.435101  309793 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:17:50.438201  309793 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:17:50.439409  309793 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1018 12:17:46.486330  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	W1018 12:17:48.985939  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	I1018 12:17:46.029837  310517 out.go:252] * Restarting existing docker container for "no-preload-406541" ...
	I1018 12:17:46.029917  310517 cli_runner.go:164] Run: docker start no-preload-406541
	I1018 12:17:46.292072  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:46.312729  310517 kic.go:430] container "no-preload-406541" state is running.
	I1018 12:17:46.313203  310517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-406541
	I1018 12:17:46.334301  310517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/config.json ...
	I1018 12:17:46.334550  310517 machine.go:93] provisionDockerMachine start ...
	I1018 12:17:46.334625  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:46.355571  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:46.355816  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:46.355831  310517 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:17:46.356532  310517 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40868->127.0.0.1:33113: read: connection reset by peer
	I1018 12:17:49.498107  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-406541
	
	I1018 12:17:49.498139  310517 ubuntu.go:182] provisioning hostname "no-preload-406541"
	I1018 12:17:49.498216  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:49.522328  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:49.522570  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:49.522585  310517 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-406541 && echo "no-preload-406541" | sudo tee /etc/hostname
	I1018 12:17:49.672945  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-406541
	
	I1018 12:17:49.673079  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:49.694618  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:49.694858  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:49.694877  310517 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-406541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-406541/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-406541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:17:49.833408  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:17:49.833445  310517 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:17:49.833506  310517 ubuntu.go:190] setting up certificates
	I1018 12:17:49.833526  310517 provision.go:84] configureAuth start
	I1018 12:17:49.833597  310517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-406541
	I1018 12:17:49.853415  310517 provision.go:143] copyHostCerts
	I1018 12:17:49.853475  310517 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:17:49.853499  310517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:17:49.853580  310517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:17:49.853696  310517 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:17:49.853709  310517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:17:49.853751  310517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:17:49.853857  310517 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:17:49.853871  310517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:17:49.853908  310517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:17:49.853979  310517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.no-preload-406541 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-406541]
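For reference, the server cert generated here carries both IP and DNS SANs (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-406541) and is signed by the ca.pem/ca-key.pem pair named in the log line. A minimal openssl sketch that would produce an equivalent certificate — file names are illustrative, not minikube's own:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.no-preload-406541" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 1095 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:localhost,DNS:minikube,DNS:no-preload-406541')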
	I1018 12:17:50.440481  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:17:50.440498  309793 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:17:50.440555  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:50.471267  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:50.473997  309793 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:50.474041  309793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:17:50.474133  309793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:17:50.481664  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:50.506684  309793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:17:50.594327  309793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:50.612619  309793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-024443" to be "Ready" ...
	I1018 12:17:50.615556  309793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:50.624079  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:17:50.624103  309793 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:17:50.640897  309793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:50.646776  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:17:50.646802  309793 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:17:50.677507  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:17:50.677533  309793 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:17:50.698558  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:17:50.698586  309793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:17:50.717037  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:17:50.717067  309793 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:17:50.737193  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:17:50.737216  309793 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:17:50.755325  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:17:50.755350  309793 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:17:50.769185  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:17:50.769212  309793 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:17:50.783320  309793 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:50.783347  309793 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:17:50.798045  309793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:51.016379  310517 provision.go:177] copyRemoteCerts
	I1018 12:17:51.016450  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:17:51.016487  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.036946  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.136726  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:17:51.155743  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:17:51.176377  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:17:51.195810  310517 provision.go:87] duration metric: took 1.362266572s to configureAuth
	I1018 12:17:51.195837  310517 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:17:51.196034  310517 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:51.196137  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.215756  310517 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:51.216008  310517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1018 12:17:51.216026  310517 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:17:51.522495  310517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:17:51.522526  310517 machine.go:96] duration metric: took 5.187956853s to provisionDockerMachine
	I1018 12:17:51.522539  310517 start.go:293] postStartSetup for "no-preload-406541" (driver="docker")
	I1018 12:17:51.522554  310517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:17:51.522617  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:17:51.522661  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.544856  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.647828  310517 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:17:51.651575  310517 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:17:51.651603  310517 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:17:51.651614  310517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:17:51.651671  310517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:17:51.651740  310517 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:17:51.651874  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:17:51.660448  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:51.679182  310517 start.go:296] duration metric: took 156.627397ms for postStartSetup
	I1018 12:17:51.679256  310517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:17:51.679298  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.698458  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.793433  310517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:17:51.798480  310517 fix.go:56] duration metric: took 5.789933491s for fixHost
	I1018 12:17:51.798511  310517 start.go:83] releasing machines lock for "no-preload-406541", held for 5.789991279s
	I1018 12:17:51.798584  310517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-406541
	I1018 12:17:51.816606  310517 ssh_runner.go:195] Run: cat /version.json
	I1018 12:17:51.816625  310517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:17:51.816658  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.816675  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:51.835906  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.836069  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:51.992984  310517 ssh_runner.go:195] Run: systemctl --version
	I1018 12:17:52.000371  310517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:17:52.042608  310517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:17:52.048811  310517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:17:52.048884  310517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:17:52.058459  310517 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
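The find command above shelves any pre-existing bridge/podman CNI configs (suffixing them with .mk_disabled) so they cannot shadow the CNI that minikube is about to select; here it finds nothing to disable. A readable, properly quoted form of the same command:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;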
	I1018 12:17:52.058487  310517 start.go:495] detecting cgroup driver to use...
	I1018 12:17:52.058516  310517 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:17:52.058562  310517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:17:52.075638  310517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:17:52.091731  310517 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:17:52.091834  310517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:17:52.110791  310517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:17:52.127170  310517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:17:52.230093  310517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:17:52.341976  310517 docker.go:234] disabling docker service ...
	I1018 12:17:52.342043  310517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:17:52.359910  310517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:17:52.375430  310517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:17:52.469889  310517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:17:52.563511  310517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:17:52.579096  310517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:17:52.594906  310517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:17:52.594969  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.605127  310517 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:17:52.605201  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.615031  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.628121  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.638844  310517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:17:52.648105  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.658328  310517 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.667871  310517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:17:52.677553  310517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:17:52.685836  310517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:17:52.694567  310517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:52.792011  310517 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:17:52.939411  310517 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:17:52.939478  310517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:17:52.943888  310517 start.go:563] Will wait 60s for crictl version
	I1018 12:17:52.943953  310517 ssh_runner.go:195] Run: which crictl
	I1018 12:17:52.948811  310517 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:17:52.981686  310517 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:17:52.981782  310517 ssh_runner.go:195] Run: crio --version
	I1018 12:17:53.012712  310517 ssh_runner.go:195] Run: crio --version
	I1018 12:17:53.065174  310517 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:17:53.045966  309793 node_ready.go:49] node "old-k8s-version-024443" is "Ready"
	I1018 12:17:53.046002  309793 node_ready.go:38] duration metric: took 2.433336279s for node "old-k8s-version-024443" to be "Ready" ...
	I1018 12:17:53.046019  309793 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:53.046072  309793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:53.784407  309793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.168814086s)
	I1018 12:17:53.784417  309793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.143486767s)
	I1018 12:17:54.324158  309793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.526042493s)
	I1018 12:17:54.325032  309793 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.278943628s)
	I1018 12:17:54.325076  309793 api_server.go:72] duration metric: took 3.930353705s to wait for apiserver process to appear ...
	I1018 12:17:54.325083  309793 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:54.325101  309793 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:17:54.327905  309793 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-024443 addons enable metrics-server
	
	I1018 12:17:54.329691  309793 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 12:17:53.066489  310517 cli_runner.go:164] Run: docker network inspect no-preload-406541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:53.089888  310517 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 12:17:53.094609  310517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:53.111803  310517 kubeadm.go:883] updating cluster {Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:17:53.111946  310517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:17:53.112010  310517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:53.150660  310517 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:17:53.150683  310517 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:17:53.150690  310517 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 12:17:53.150808  310517 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-406541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:17:53.150893  310517 ssh_runner.go:195] Run: crio config
	I1018 12:17:53.204319  310517 cni.go:84] Creating CNI manager for ""
	I1018 12:17:53.204355  310517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:17:53.204376  310517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:17:53.204405  310517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-406541 NodeName:no-preload-406541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:17:53.204562  310517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-406541"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:17:53.204633  310517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:17:53.215460  310517 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:17:53.215537  310517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:17:53.224850  310517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:17:53.240461  310517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:17:53.261283  310517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 12:17:53.277344  310517 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:17:53.281549  310517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:17:53.292682  310517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:53.396838  310517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:53.418362  310517 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541 for IP: 192.168.94.2
	I1018 12:17:53.418391  310517 certs.go:195] generating shared ca certs ...
	I1018 12:17:53.418414  310517 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:53.418584  310517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:17:53.418650  310517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:17:53.418668  310517 certs.go:257] generating profile certs ...
	I1018 12:17:53.418799  310517 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/client.key
	I1018 12:17:53.418882  310517 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/apiserver.key.4f4cf101
	I1018 12:17:53.418928  310517 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/proxy-client.key
	I1018 12:17:53.419104  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:17:53.419149  310517 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:17:53.419161  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:17:53.419188  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:17:53.419218  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:17:53.419250  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:17:53.419302  310517 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:17:53.420113  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:17:53.441462  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:17:53.461597  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:17:53.484380  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:17:53.522157  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:17:53.547074  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:17:53.574502  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:17:53.595620  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/no-preload-406541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:17:53.615749  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:17:53.640103  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:17:53.662488  310517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:17:53.685642  310517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:17:53.701661  310517 ssh_runner.go:195] Run: openssl version
	I1018 12:17:53.710140  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:17:53.722521  310517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:17:53.727297  310517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:17:53.727357  310517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:17:53.777720  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:17:53.788688  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:17:53.801703  310517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:17:53.809690  310517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:17:53.809779  310517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:17:53.850035  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:17:53.861385  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:17:53.871682  310517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:53.876219  310517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:53.876284  310517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:53.914881  310517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:17:53.925639  310517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:17:53.930104  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:17:53.983731  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:17:54.050477  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:17:54.116416  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:17:54.181269  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:17:54.244500  310517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
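The six `-checkend 86400` probes above ask whether each control-plane cert is still valid 24 hours from now; only the exit status matters, and a non-zero exit is what would force regeneration. An equivalent standalone check for one of them:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	  echo "still valid for >=24h"
	else
	  echo "expires within 24h"
	fi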
	I1018 12:17:54.302454  310517 kubeadm.go:400] StartCluster: {Name:no-preload-406541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-406541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:54.302534  310517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:17:54.302581  310517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:17:54.347167  310517 cri.go:89] found id: "5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e"
	I1018 12:17:54.347193  310517 cri.go:89] found id: "133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3"
	I1018 12:17:54.347199  310517 cri.go:89] found id: "37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7"
	I1018 12:17:54.347203  310517 cri.go:89] found id: "786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13"
	I1018 12:17:54.347207  310517 cri.go:89] found id: ""
	I1018 12:17:54.347261  310517 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:17:54.365891  310517 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:17:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:17:54.366004  310517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:17:54.379456  310517 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:17:54.379483  310517 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:17:54.379530  310517 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:17:54.390456  310517 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:17:54.391845  310517 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-406541" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:54.392750  310517 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-406541" cluster setting kubeconfig missing "no-preload-406541" context setting]
	I1018 12:17:54.394396  310517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:54.397106  310517 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:17:54.408092  310517 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
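A plausible reading of the two lines above: the freshly rendered /var/tmp/minikube/kubeadm.yaml.new matched the kubeadm.yaml already on the node, so the restart path can skip re-running kubeadm. The same check as a standalone snippet:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no reconfiguration required"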
	I1018 12:17:54.408143  310517 kubeadm.go:601] duration metric: took 28.647208ms to restartPrimaryControlPlane
	I1018 12:17:54.408155  310517 kubeadm.go:402] duration metric: took 105.709981ms to StartCluster
	I1018 12:17:54.408175  310517 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:54.408260  310517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:17:54.410019  310517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:54.410279  310517 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:17:54.410342  310517 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:17:54.410450  310517 addons.go:69] Setting storage-provisioner=true in profile "no-preload-406541"
	I1018 12:17:54.410461  310517 addons.go:69] Setting dashboard=true in profile "no-preload-406541"
	I1018 12:17:54.410473  310517 addons.go:238] Setting addon storage-provisioner=true in "no-preload-406541"
	W1018 12:17:54.410482  310517 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:17:54.410486  310517 addons.go:238] Setting addon dashboard=true in "no-preload-406541"
	W1018 12:17:54.410495  310517 addons.go:247] addon dashboard should already be in state true
	I1018 12:17:54.410491  310517 addons.go:69] Setting default-storageclass=true in profile "no-preload-406541"
	I1018 12:17:54.410513  310517 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-406541"
	I1018 12:17:54.410522  310517 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:17:54.410559  310517 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:17:54.410874  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.410511  310517 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:17:54.411038  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.411137  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.412688  310517 out.go:179] * Verifying Kubernetes components...
	I1018 12:17:54.414332  310517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:54.443523  310517 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:17:54.444965  310517 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:17:54.446231  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:17:54.446264  310517 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:17:54.446237  310517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1018 12:17:50.986964  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	W1018 12:17:53.485593  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	W1018 12:17:55.491134  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	I1018 12:17:54.446322  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:54.447508  310517 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:54.447558  310517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:17:54.447622  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:54.448174  310517 addons.go:238] Setting addon default-storageclass=true in "no-preload-406541"
	W1018 12:17:54.448200  310517 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:17:54.448229  310517 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:17:54.448712  310517 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:17:54.482549  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:54.488303  310517 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:54.488381  310517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:17:54.488468  310517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:17:54.489309  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:54.516388  310517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:17:54.583220  310517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:54.597546  310517 node_ready.go:35] waiting up to 6m0s for node "no-preload-406541" to be "Ready" ...
	I1018 12:17:54.610479  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:17:54.610503  310517 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:17:54.611730  310517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:17:54.626852  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:17:54.626879  310517 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:17:54.630668  310517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:17:54.647602  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:17:54.647627  310517 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:17:54.664345  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:17:54.664370  310517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:17:54.684251  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:17:54.684297  310517 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:17:54.701306  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:17:54.701349  310517 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:17:54.722491  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:17:54.722515  310517 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:17:54.739508  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:17:54.739543  310517 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:17:54.756688  310517 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:54.756712  310517 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:17:54.772197  310517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:17:55.836083  310517 node_ready.go:49] node "no-preload-406541" is "Ready"
	I1018 12:17:55.836122  310517 node_ready.go:38] duration metric: took 1.238531671s for node "no-preload-406541" to be "Ready" ...
	I1018 12:17:55.836137  310517 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:55.836191  310517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:56.359711  310517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.747950379s)
	I1018 12:17:56.359797  310517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.729091238s)
	I1018 12:17:56.359971  310517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.587738824s)
	I1018 12:17:56.360011  310517 api_server.go:72] duration metric: took 1.949706017s to wait for apiserver process to appear ...
	I1018 12:17:56.360037  310517 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:56.360102  310517 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 12:17:56.361552  310517 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-406541 addons enable metrics-server
	
	I1018 12:17:56.364492  310517 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:17:56.364521  310517 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:17:56.368067  310517 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 12:17:54.331037  309793 addons.go:514] duration metric: took 3.936153543s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 12:17:54.333424  309793 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1018 12:17:54.333454  309793 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1018 12:17:54.825907  309793 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:17:54.830944  309793 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:17:54.832163  309793 api_server.go:141] control plane version: v1.28.0
	I1018 12:17:54.832189  309793 api_server.go:131] duration metric: took 507.099443ms to wait for apiserver health ...
	I1018 12:17:54.832199  309793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:54.835509  309793 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:54.835542  309793 system_pods.go:61] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:54.835553  309793 system_pods.go:61] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:17:54.835563  309793 system_pods.go:61] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:54.835570  309793 system_pods.go:61] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:17:54.835574  309793 system_pods.go:61] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:17:54.835581  309793 system_pods.go:61] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:54.835586  309793 system_pods.go:61] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:17:54.835591  309793 system_pods.go:61] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:54.835598  309793 system_pods.go:74] duration metric: took 3.392852ms to wait for pod list to return data ...
	I1018 12:17:54.835607  309793 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:54.837737  309793 default_sa.go:45] found service account: "default"
	I1018 12:17:54.837754  309793 default_sa.go:55] duration metric: took 2.141424ms for default service account to be created ...
	I1018 12:17:54.837775  309793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:54.841320  309793 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:54.841349  309793 system_pods.go:89] "coredns-5dd5756b68-s4wnq" [59e8e628-e270-400c-b0a5-a5aad16a309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:54.841357  309793 system_pods.go:89] "etcd-old-k8s-version-024443" [c16041af-6f94-4167-a05b-b491760c7de5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:17:54.841362  309793 system_pods.go:89] "kindnet-g8pwk" [d825bcd2-5610-4618-a451-3781667da707] Running
	I1018 12:17:54.841369  309793 system_pods.go:89] "kube-apiserver-old-k8s-version-024443" [86e07595-eb3c-4df2-b7e6-d93041e09957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:17:54.841374  309793 system_pods.go:89] "kube-controller-manager-old-k8s-version-024443" [9753fb42-512c-49c6-95d4-a4b07489fe43] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:17:54.841384  309793 system_pods.go:89] "kube-proxy-tzlpd" [d19b38b0-d7bc-4c78-8c03-60b85301d9d4] Running
	I1018 12:17:54.841392  309793 system_pods.go:89] "kube-scheduler-old-k8s-version-024443" [a2c41a05-53e0-4335-9384-84812ba29928] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:17:54.841398  309793 system_pods.go:89] "storage-provisioner" [2f69c3ee-cd53-4da2-9101-f6e46fb2d81a] Running
	I1018 12:17:54.841405  309793 system_pods.go:126] duration metric: took 3.625267ms to wait for k8s-apps to be running ...
	I1018 12:17:54.841413  309793 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:54.841453  309793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:54.856451  309793 system_svc.go:56] duration metric: took 15.027046ms WaitForService to wait for kubelet
	I1018 12:17:54.856503  309793 kubeadm.go:586] duration metric: took 4.461779541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:54.856526  309793 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:54.859431  309793 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:54.859451  309793 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:54.859464  309793 node_conditions.go:105] duration metric: took 2.933654ms to run NodePressure ...
	I1018 12:17:54.859475  309793 start.go:241] waiting for startup goroutines ...
	I1018 12:17:54.859481  309793 start.go:246] waiting for cluster config update ...
	I1018 12:17:54.859495  309793 start.go:255] writing updated cluster config ...
	I1018 12:17:54.859732  309793 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:54.864583  309793 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:54.870139  309793 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:17:56.877733  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	W1018 12:17:57.985212  295702 node_ready.go:57] node "embed-certs-175371" has "Ready":"False" status (will retry)
	I1018 12:17:58.984931  295702 node_ready.go:49] node "embed-certs-175371" is "Ready"
	I1018 12:17:58.984963  295702 node_ready.go:38] duration metric: took 40.502714718s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:17:58.984990  295702 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:58.985044  295702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:58.997807  295702 api_server.go:72] duration metric: took 40.825041937s to wait for apiserver process to appear ...
	I1018 12:17:58.997839  295702 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:58.997915  295702 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:17:59.003869  295702 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 12:17:59.004969  295702 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:59.004992  295702 api_server.go:131] duration metric: took 7.146858ms to wait for apiserver health ...
	I1018 12:17:59.005000  295702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:59.007977  295702 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:59.008004  295702 system_pods.go:61] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:59.008011  295702 system_pods.go:61] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running
	I1018 12:17:59.008017  295702 system_pods.go:61] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running
	I1018 12:17:59.008025  295702 system_pods.go:61] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running
	I1018 12:17:59.008034  295702 system_pods.go:61] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running
	I1018 12:17:59.008038  295702 system_pods.go:61] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running
	I1018 12:17:59.008041  295702 system_pods.go:61] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running
	I1018 12:17:59.008046  295702 system_pods.go:61] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:59.008053  295702 system_pods.go:74] duration metric: took 3.04809ms to wait for pod list to return data ...
	I1018 12:17:59.008063  295702 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:59.010290  295702 default_sa.go:45] found service account: "default"
	I1018 12:17:59.010308  295702 default_sa.go:55] duration metric: took 2.238903ms for default service account to be created ...
	I1018 12:17:59.010318  295702 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:59.012836  295702 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:59.012860  295702 system_pods.go:89] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:59.012865  295702 system_pods.go:89] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running
	I1018 12:17:59.012870  295702 system_pods.go:89] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running
	I1018 12:17:59.012875  295702 system_pods.go:89] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running
	I1018 12:17:59.012879  295702 system_pods.go:89] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running
	I1018 12:17:59.012883  295702 system_pods.go:89] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running
	I1018 12:17:59.012886  295702 system_pods.go:89] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running
	I1018 12:17:59.012893  295702 system_pods.go:89] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:59.012911  295702 retry.go:31] will retry after 241.591552ms: missing components: kube-dns
	I1018 12:17:59.259191  295702 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:59.259228  295702 system_pods.go:89] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:59.259237  295702 system_pods.go:89] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running
	I1018 12:17:59.259245  295702 system_pods.go:89] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running
	I1018 12:17:59.259251  295702 system_pods.go:89] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running
	I1018 12:17:59.259257  295702 system_pods.go:89] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running
	I1018 12:17:59.259261  295702 system_pods.go:89] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running
	I1018 12:17:59.259268  295702 system_pods.go:89] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running
	I1018 12:17:59.259281  295702 system_pods.go:89] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:59.259308  295702 retry.go:31] will retry after 322.27915ms: missing components: kube-dns
	I1018 12:17:59.585640  295702 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:59.585666  295702 system_pods.go:89] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running
	I1018 12:17:59.585671  295702 system_pods.go:89] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running
	I1018 12:17:59.585675  295702 system_pods.go:89] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running
	I1018 12:17:59.585679  295702 system_pods.go:89] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running
	I1018 12:17:59.585682  295702 system_pods.go:89] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running
	I1018 12:17:59.585685  295702 system_pods.go:89] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running
	I1018 12:17:59.585688  295702 system_pods.go:89] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running
	I1018 12:17:59.585692  295702 system_pods.go:89] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running
	I1018 12:17:59.585699  295702 system_pods.go:126] duration metric: took 575.376054ms to wait for k8s-apps to be running ...
	I1018 12:17:59.585706  295702 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:59.585770  295702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:59.599802  295702 system_svc.go:56] duration metric: took 14.086413ms WaitForService to wait for kubelet
	I1018 12:17:59.599828  295702 kubeadm.go:586] duration metric: took 41.427075903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:59.599847  295702 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:59.602717  295702 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:59.602746  295702 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:59.602808  295702 node_conditions.go:105] duration metric: took 2.954533ms to run NodePressure ...
	I1018 12:17:59.602828  295702 start.go:241] waiting for startup goroutines ...
	I1018 12:17:59.602843  295702 start.go:246] waiting for cluster config update ...
	I1018 12:17:59.602861  295702 start.go:255] writing updated cluster config ...
	I1018 12:17:59.603144  295702 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:59.607596  295702 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:59.611393  295702 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:59.615965  295702 pod_ready.go:94] pod "coredns-66bc5c9577-b6h9l" is "Ready"
	I1018 12:17:59.615986  295702 pod_ready.go:86] duration metric: took 4.567753ms for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:59.617859  295702 pod_ready.go:83] waiting for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:59.621825  295702 pod_ready.go:94] pod "etcd-embed-certs-175371" is "Ready"
	I1018 12:17:59.621848  295702 pod_ready.go:86] duration metric: took 3.970403ms for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:59.623535  295702 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:59.627033  295702 pod_ready.go:94] pod "kube-apiserver-embed-certs-175371" is "Ready"
	I1018 12:17:59.627055  295702 pod_ready.go:86] duration metric: took 3.495142ms for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:59.631430  295702 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:00.012292  295702 pod_ready.go:94] pod "kube-controller-manager-embed-certs-175371" is "Ready"
	I1018 12:18:00.012324  295702 pod_ready.go:86] duration metric: took 380.871892ms for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:00.212659  295702 pod_ready.go:83] waiting for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:00.613260  295702 pod_ready.go:94] pod "kube-proxy-t2x4c" is "Ready"
	I1018 12:18:00.613306  295702 pod_ready.go:86] duration metric: took 400.618768ms for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:17:56.369186  310517 addons.go:514] duration metric: took 1.958847755s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 12:17:56.861071  310517 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 12:17:56.868564  310517 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:17:56.868598  310517 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:17:57.361086  310517 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 12:17:57.366180  310517 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 12:17:57.367189  310517 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:57.367215  310517 api_server.go:131] duration metric: took 1.007126958s to wait for apiserver health ...
	I1018 12:17:57.367222  310517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:57.370613  310517 system_pods.go:59] 8 kube-system pods found
	I1018 12:17:57.370646  310517 system_pods.go:61] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:57.370656  310517 system_pods.go:61] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:17:57.370666  310517 system_pods.go:61] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:17:57.370676  310517 system_pods.go:61] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:17:57.370688  310517 system_pods.go:61] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:17:57.370707  310517 system_pods.go:61] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:17:57.370715  310517 system_pods.go:61] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:17:57.370723  310517 system_pods.go:61] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:57.370735  310517 system_pods.go:74] duration metric: took 3.505682ms to wait for pod list to return data ...
	I1018 12:17:57.370748  310517 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:57.373522  310517 default_sa.go:45] found service account: "default"
	I1018 12:17:57.373545  310517 default_sa.go:55] duration metric: took 2.79012ms for default service account to be created ...
	I1018 12:17:57.373556  310517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:57.376686  310517 system_pods.go:86] 8 kube-system pods found
	I1018 12:17:57.376722  310517 system_pods.go:89] "coredns-66bc5c9577-bwvrq" [eee9c519-7100-41a0-8a95-6daae8b6b46b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:57.376732  310517 system_pods.go:89] "etcd-no-preload-406541" [32415a7e-882e-4c2f-b369-3841d4c57482] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:17:57.376749  310517 system_pods.go:89] "kindnet-dwg7c" [d2ecaa2c-b1fd-4635-8521-39461256e9ec] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:17:57.376792  310517 system_pods.go:89] "kube-apiserver-no-preload-406541" [179f86d1-c11f-42fb-821a-a7c4877492d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:17:57.376814  310517 system_pods.go:89] "kube-controller-manager-no-preload-406541" [092fc484-967e-4890-aa37-e52f994dfb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:17:57.376823  310517 system_pods.go:89] "kube-proxy-9vbmr" [396c662e-9914-4ffe-a26e-4fff6e123577] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:17:57.376831  310517 system_pods.go:89] "kube-scheduler-no-preload-406541" [08ef79d5-dedd-4034-8278-ddd13a8a6dbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:17:57.376840  310517 system_pods.go:89] "storage-provisioner" [7c61b5da-ef85-46ff-a054-051967cf9d79] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:57.376859  310517 system_pods.go:126] duration metric: took 3.288262ms to wait for k8s-apps to be running ...
	I1018 12:17:57.376872  310517 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:57.376925  310517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:57.391183  310517 system_svc.go:56] duration metric: took 14.300525ms WaitForService to wait for kubelet
	I1018 12:17:57.391216  310517 kubeadm.go:586] duration metric: took 2.980911968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:57.391252  310517 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:57.394410  310517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:17:57.394441  310517 node_conditions.go:123] node cpu capacity is 8
	I1018 12:17:57.394456  310517 node_conditions.go:105] duration metric: took 3.196288ms to run NodePressure ...
	I1018 12:17:57.394470  310517 start.go:241] waiting for startup goroutines ...
	I1018 12:17:57.394483  310517 start.go:246] waiting for cluster config update ...
	I1018 12:17:57.394500  310517 start.go:255] writing updated cluster config ...
	I1018 12:17:57.394851  310517 ssh_runner.go:195] Run: rm -f paused
	I1018 12:17:57.399068  310517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:17:57.403576  310517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:17:59.409069  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	I1018 12:18:00.812738  295702 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:01.213132  295702 pod_ready.go:94] pod "kube-scheduler-embed-certs-175371" is "Ready"
	I1018 12:18:01.213166  295702 pod_ready.go:86] duration metric: took 400.31296ms for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:01.213185  295702 pod_ready.go:40] duration metric: took 1.605551647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:01.274899  295702 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:18:01.276745  295702 out.go:179] * Done! kubectl is now configured to use "embed-certs-175371" cluster and "default" namespace by default
	W1018 12:17:59.376586  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	W1018 12:18:01.377801  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	W1018 12:18:01.411091  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	W1018 12:18:03.910103  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	W1018 12:18:03.877423  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	W1018 12:18:06.376622  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
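
The 500-then-200 sequence above is minikube's api_server.go polling the apiserver's /healthz endpoint until every post-start hook reports [+] ok. Below is a minimal Go sketch of that poll loop; the endpoint URL, the 500ms interval, and the skipped TLS verification are illustrative assumptions only (minikube's real check authenticates with the cluster's client certificates), not minikube's actual implementation.

// healthzpoll.go — a sketch of the "waiting for apiserver healthz status"
// loop seen in the log above. Assumptions: endpoint URL, poll interval,
// and InsecureSkipVerify (illustration only; real code should verify TLS).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustration only: never skip verification in production.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case in the log
			}
			// A 500 body lists each post-start hook as [+] ok or
			// [-] failed, exactly as captured above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}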
	
	
	==> CRI-O <==
	Oct 18 12:17:59 embed-certs-175371 crio[769]: time="2025-10-18T12:17:59.125433812Z" level=info msg="Starting container: f8dd5362a667694cddbc9aa1ae78ce3214eb87eeb045aae8fbe72989fe033a67" id=4d628949-cee7-4bd7-a612-e9c06ad6f30b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:17:59 embed-certs-175371 crio[769]: time="2025-10-18T12:17:59.127612258Z" level=info msg="Started container" PID=1831 containerID=f8dd5362a667694cddbc9aa1ae78ce3214eb87eeb045aae8fbe72989fe033a67 description=kube-system/coredns-66bc5c9577-b6h9l/coredns id=4d628949-cee7-4bd7-a612-e9c06ad6f30b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b84582e84e9d9cbbc59ef01c60e5322a0517c4ff8dc5c5842642717402a08515
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.779186812Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8ae985de-0b93-47f2-9c67-620e76453407 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.779327787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.79152744Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f847a3ad3ad741aa034cb37d245db4bdfac7824cef51cda9124caf2531da3a24 UID:d7e2785e-4860-4f2d-af78-a6a7770e8f29 NetNS:/var/run/netns/03d3b718-e5d2-4ebb-94ee-87d577280c04 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00064eb48}] Aliases:map[]}"
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.791600022Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.805320792Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f847a3ad3ad741aa034cb37d245db4bdfac7824cef51cda9124caf2531da3a24 UID:d7e2785e-4860-4f2d-af78-a6a7770e8f29 NetNS:/var/run/netns/03d3b718-e5d2-4ebb-94ee-87d577280c04 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00064eb48}] Aliases:map[]}"
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.805499254Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.80657111Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.807859826Z" level=info msg="Ran pod sandbox f847a3ad3ad741aa034cb37d245db4bdfac7824cef51cda9124caf2531da3a24 with infra container: default/busybox/POD" id=8ae985de-0b93-47f2-9c67-620e76453407 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.809537556Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c117a76-1346-4c43-b85d-61f5b5eace0f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.809664941Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5c117a76-1346-4c43-b85d-61f5b5eace0f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.809706042Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5c117a76-1346-4c43-b85d-61f5b5eace0f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.81093842Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dfc644e0-71bc-4ce7-ad3d-b185895931b3 name=/runtime.v1.ImageService/PullImage
	Oct 18 12:18:01 embed-certs-175371 crio[769]: time="2025-10-18T12:18:01.812852048Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.20456688Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=dfc644e0-71bc-4ce7-ad3d-b185895931b3 name=/runtime.v1.ImageService/PullImage
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.205350543Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=53e859db-6d6a-4223-9c21-a522d5acfaf4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.207015613Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ceed6227-0d13-471e-bbeb-86da91e0dce8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.210882949Z" level=info msg="Creating container: default/busybox/busybox" id=a4275969-b683-4762-8bc0-a8bea6ef4004 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.211731463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.216349604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.216904319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.250490798Z" level=info msg="Created container c39b82781a676ed55d8dc0a7879f62c46a83158e5a0755f2df51a21b3f9b8e6e: default/busybox/busybox" id=a4275969-b683-4762-8bc0-a8bea6ef4004 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.251247054Z" level=info msg="Starting container: c39b82781a676ed55d8dc0a7879f62c46a83158e5a0755f2df51a21b3f9b8e6e" id=46f6817a-e65f-444f-bbdd-310b081a1dd1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:03 embed-certs-175371 crio[769]: time="2025-10-18T12:18:03.253404315Z" level=info msg="Started container" PID=1905 containerID=c39b82781a676ed55d8dc0a7879f62c46a83158e5a0755f2df51a21b3f9b8e6e description=default/busybox/busybox id=46f6817a-e65f-444f-bbdd-310b081a1dd1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f847a3ad3ad741aa034cb37d245db4bdfac7824cef51cda9124caf2531da3a24
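
The container status table below is the kind of listing a CRI client obtains from CRI-O's gRPC endpoint (the same RuntimeService the CRI-O log above is answering). Here is a minimal sketch, assuming CRI-O's default socket path /var/run/crio/crio.sock and the v1 CRI API; the usual CLI equivalent is sudo crictl ps -a.

// crilist.go — a sketch of listing containers over the CRI socket.
// Assumption: CRI-O's default unix socket path; adjust for other runtimes.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Mirrors the CONTAINER / NAME / STATE columns printed below.
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}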
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c39b82781a676       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago        Running             busybox                   0                   f847a3ad3ad74       busybox                                      default
	f8dd5362a6676       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago       Running             coredns                   0                   b84582e84e9d9       coredns-66bc5c9577-b6h9l                     kube-system
	7f357bdc8f42e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago       Running             storage-provisioner       0                   0c3621936920f       storage-provisioner                          kube-system
	d03e1ff1db00b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      53 seconds ago       Running             kindnet-cni               0                   59905e30c2ae3       kindnet-dxw8r                                kube-system
	cbe19111e5d8d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      53 seconds ago       Running             kube-proxy                0                   37b9b06b195b5       kube-proxy-t2x4c                             kube-system
	fd5c9975146e3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   e8c1343ec9a90       kube-scheduler-embed-certs-175371            kube-system
	ef4e55197fc2e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   b02923b1cf76b       etcd-embed-certs-175371                      kube-system
	4db4cb6f6a07d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   9088bbcf528e4       kube-apiserver-embed-certs-175371            kube-system
	540df91d3c88e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   b4f2bc66464b3       kube-controller-manager-embed-certs-175371   kube-system
	
	
	==> coredns [f8dd5362a667694cddbc9aa1ae78ce3214eb87eeb045aae8fbe72989fe033a67] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49607 - 16093 "HINFO IN 1239910321325234120.6403829648821156917. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019262171s
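
The HINFO line above is CoreDNS's startup self-query confirming it answers on :53. A sketch of the same sort of liveness probe against the cluster DNS service follows; the 10.96.0.10 address is an assumption (minikube's default kube-dns ClusterIP; verify with kubectl -n kube-system get svc kube-dns).

// dnsprobe.go — resolve a cluster name through a specific DNS server,
// analogous to CoreDNS's startup HINFO self-check. The ClusterIP below
// is an assumption, not taken from this report.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	resolver := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Force every lookup through the cluster DNS service.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := resolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatalf("cluster DNS not answering: %v", err)
	}
	fmt.Println("kubernetes.default resolves to", addrs)
}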
	
	
	==> describe nodes <==
	Name:               embed-certs-175371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-175371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=embed-certs-175371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_17_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:17:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-175371
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:17:58 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:17:58 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:17:58 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:17:58 +0000   Sat, 18 Oct 2025 12:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-175371
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d2c06e1f-4c4f-4264-8151-34f2c71eddce
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-b6h9l                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-embed-certs-175371                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-dxw8r                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-embed-certs-175371             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-175371    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-t2x4c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-embed-certs-175371             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node embed-certs-175371 event: Registered Node embed-certs-175371 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-175371 status is now: NodeReady
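
The node description above is what minikube's node_conditions.go consults when the log earlier reports "verifying NodePressure condition" and reads the node's cpu and ephemeral-storage capacity. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default path; it illustrates the check, it is not minikube's code.

// nodepressure.go — list nodes, print capacity, and flag any pressure
// condition that is True, mirroring the MemoryPressure/DiskPressure/
// PIDPressure rows in the describe-nodes output above.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	pressure := []corev1.NodeConditionType{
		corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure,
	}
	for _, n := range nodes.Items {
		// Same figures the log reports as "node cpu capacity" and
		// "node storage ephemeral capacity".
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, cond := range n.Status.Conditions {
			for _, p := range pressure {
				if cond.Type == p && cond.Status == corev1.ConditionTrue {
					fmt.Printf("  node %s reports %s\n", n.Name, cond.Type)
				}
			}
		}
	}
}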
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
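The "martian source" messages are the kernel's reverse-path/source validation logging pod-subnet traffic (10.244.0.0/16) that appears on the container's eth0; with kindnet on the Docker driver this is cosmetic noise, not a failure. A sketch of the relevant sysctls, should the noise need silencing on a host you control (not something the test harness does):

	# Inspect the settings that produce these log lines:
	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians
	# Silence the logging without changing filtering behaviour:
	sudo sysctl -w net.ipv4.conf.all.log_martians=0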
	
	
	==> etcd [ef4e55197fc2e7fde6f627c84d9f18340303e109c47699b2115dffc428d05bd7] <==
	{"level":"warn","ts":"2025-10-18T12:17:09.217012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.224506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.232654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.240441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.247468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.254832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.262931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.270463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.286113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.303550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:09.362113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:13.406184Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.340769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-175371\" limit:1 ","response":"range_response_count:1 size:4424"}
	{"level":"info","ts":"2025-10-18T12:17:13.406299Z","caller":"traceutil/trace.go:172","msg":"trace[2023947792] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-175371; range_end:; response_count:1; response_revision:302; }","duration":"104.485749ms","start":"2025-10-18T12:17:13.301801Z","end":"2025-10-18T12:17:13.406287Z","steps":["trace[2023947792] 'agreement among raft nodes before linearized reading'  (duration: 61.794884ms)","trace[2023947792] 'range keys from in-memory index tree'  (duration: 42.457176ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:17:13.406242Z","caller":"traceutil/trace.go:172","msg":"trace[989040621] transaction","detail":"{read_only:false; number_of_response:0; response_revision:302; }","duration":"131.981628ms","start":"2025-10-18T12:17:13.274210Z","end":"2025-10-18T12:17:13.406191Z","steps":["trace[989040621] 'process raft request'  (duration: 89.450874ms)","trace[989040621] 'compare'  (duration: 42.396171ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:17:13.406230Z","caller":"traceutil/trace.go:172","msg":"trace[1399067911] transaction","detail":"{read_only:false; number_of_response:0; response_revision:302; }","duration":"131.602377ms","start":"2025-10-18T12:17:13.274603Z","end":"2025-10-18T12:17:13.406206Z","steps":["trace[1399067911] 'process raft request'  (duration: 131.508927ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:13.570752Z","caller":"traceutil/trace.go:172","msg":"trace[37903077] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"155.54437ms","start":"2025-10-18T12:17:13.415183Z","end":"2025-10-18T12:17:13.570728Z","steps":["trace[37903077] 'process raft request'  (duration: 132.156186ms)","trace[37903077] 'compare'  (duration: 23.265387ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:17:13.592960Z","caller":"traceutil/trace.go:172","msg":"trace[796109787] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"114.262828ms","start":"2025-10-18T12:17:13.478677Z","end":"2025-10-18T12:17:13.592940Z","steps":["trace[796109787] 'process raft request'  (duration: 114.169913ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:13.725812Z","caller":"traceutil/trace.go:172","msg":"trace[1261886438] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"124.20665ms","start":"2025-10-18T12:17:13.601583Z","end":"2025-10-18T12:17:13.725790Z","steps":["trace[1261886438] 'process raft request'  (duration: 124.116278ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:13.725819Z","caller":"traceutil/trace.go:172","msg":"trace[1163786078] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"129.88695ms","start":"2025-10-18T12:17:13.595903Z","end":"2025-10-18T12:17:13.725790Z","steps":["trace[1163786078] 'process raft request'  (duration: 126.93943ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:13.870051Z","caller":"traceutil/trace.go:172","msg":"trace[1566941978] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"133.161609ms","start":"2025-10-18T12:17:13.736874Z","end":"2025-10-18T12:17:13.870035Z","steps":["trace[1566941978] 'process raft request'  (duration: 133.128285ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:13.870111Z","caller":"traceutil/trace.go:172","msg":"trace[1152691629] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"134.95969ms","start":"2025-10-18T12:17:13.735115Z","end":"2025-10-18T12:17:13.870075Z","steps":["trace[1152691629] 'process raft request'  (duration: 126.577563ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:14.104898Z","caller":"traceutil/trace.go:172","msg":"trace[840923946] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"130.997589ms","start":"2025-10-18T12:17:13.973877Z","end":"2025-10-18T12:17:14.104874Z","steps":["trace[840923946] 'process raft request'  (duration: 83.371386ms)","trace[840923946] 'compare'  (duration: 47.459314ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:17:14.436195Z","caller":"traceutil/trace.go:172","msg":"trace[2119900607] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"126.867601ms","start":"2025-10-18T12:17:14.309296Z","end":"2025-10-18T12:17:14.436163Z","steps":["trace[2119900607] 'process raft request'  (duration: 126.74741ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:15.059544Z","caller":"traceutil/trace.go:172","msg":"trace[810605174] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"122.232712ms","start":"2025-10-18T12:17:14.937290Z","end":"2025-10-18T12:17:15.059523Z","steps":["trace[810605174] 'process raft request'  (duration: 122.086363ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:17:15.325359Z","caller":"traceutil/trace.go:172","msg":"trace[1036949219] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"101.385881ms","start":"2025-10-18T12:17:15.223955Z","end":"2025-10-18T12:17:15.325340Z","steps":["trace[1036949219] 'process raft request'  (duration: 101.238986ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:18:11 up  1:00,  0 user,  load average: 4.65, 4.22, 2.61
	Linux embed-certs-175371 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d03e1ff1db00b938ec2cb74b4389169edba86dbc6f03797c2a2bfaf6aad43fb0] <==
	I1018 12:17:18.412883       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:17:18.413487       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 12:17:18.413897       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:17:18.413975       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:17:18.414025       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:17:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:17:18.617313       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:17:18.617344       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:17:18.617355       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:17:18.617488       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 12:17:48.618034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 12:17:48.618034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 12:17:48.618034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 12:17:48.618066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 12:17:49.917578       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:49.917627       1 metrics.go:72] Registering metrics
	I1018 12:17:49.917697       1 controller.go:711] "Syncing nftables rules"
	I1018 12:17:58.621040       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:17:58.621086       1 main.go:301] handling current node
	I1018 12:18:08.619858       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:18:08.619895       1 main.go:301] handling current node
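The reflector timeouts against 10.96.0.1:443 come from list calls issued at kindnet startup (12:17:18), before the service VIP path was usable from the pod; they hung until the 30s client timeout at 12:17:48, after which the immediate retry succeeded and caches synced at 12:17:49. To confirm the apiserver VIP is actually backed, assuming the same context:

	kubectl --context embed-certs-175371 get endpointslices -l kubernetes.io/service-name=kubernetes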
	
	
	==> kube-apiserver [4db4cb6f6a07d4d8c8ab58258f5d4a916c450f62e5cd530175e9a4d817458a84] <==
	I1018 12:17:09.959782       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 12:17:09.959797       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:17:09.959837       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:17:09.963458       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:17:09.969313       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:17:09.991593       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:10.149304       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:17:10.847482       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:17:10.851235       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:17:10.851257       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:17:11.452852       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:17:11.499132       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:17:11.549863       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:17:11.556372       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 12:17:11.557388       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:17:11.561961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:17:11.876150       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:17:12.406164       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:17:12.417380       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:17:12.428517       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:17:17.779895       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 12:17:17.880478       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:17.885341       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:17:17.928431       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1018 12:18:09.586562       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:49904: use of closed network connection
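The "quota admission added evaluator" lines are emitted the first time each resource type flows through the ResourceQuota admission plugin, so this section doubles as a timeline of object kinds being created; the lone conn.go error at 12:18:09 is a client hanging up mid-request. The apiserver's effective flags can be read off its static-pod spec (pod name as in the node section above):

	kubectl --context embed-certs-175371 -n kube-system get pod kube-apiserver-embed-certs-175371 \
	  -o jsonpath='{.spec.containers[0].command}'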
	
	
	==> kube-controller-manager [540df91d3c88ea5a46e4612724b1df59c3abcaa4cf0ed9d3af64b354fd0d5faf] <==
	I1018 12:17:16.872114       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:17:16.875583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:17:16.875592       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 12:17:16.875723       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:17:16.875730       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:17:16.876816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:17:16.876834       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:17:16.876871       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:17:16.876913       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:17:16.876918       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:17:16.877058       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:17:16.877092       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:17:16.877213       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 12:17:16.881621       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:17:16.881707       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:17:16.881778       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:17:16.881794       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:17:16.881802       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:17:16.887949       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:16.890375       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-175371" podCIDRs=["10.244.0.0/24"]
	I1018 12:17:16.893103       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:17:16.900448       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:17:16.901608       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:17:16.906946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:01.831270       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cbe19111e5d8d2ee7216cb6854d3c2c9b070416dc341abb74090a064775dffb6] <==
	I1018 12:17:18.247645       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:17:18.320564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:17:18.420690       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:17:18.420836       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 12:17:18.420965       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:17:18.444273       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:17:18.444342       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:17:18.455299       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:17:18.455871       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:17:18.455972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:18.458009       1 config.go:200] "Starting service config controller"
	I1018 12:17:18.458270       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:17:18.458108       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:17:18.458310       1 config.go:309] "Starting node config controller"
	I1018 12:17:18.458326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:17:18.458315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:17:18.458119       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:17:18.458438       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:17:18.559293       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:17:18.559324       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:17:18.559334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:17:18.559353       1 shared_informer.go:356] "Caches are synced" controller="service config"
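kube-proxy's only complaint here is advisory: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The log itself names the remedy for anyone managing the flags directly (not something this test changes):

	kube-proxy --nodeport-addresses primary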
	
	
	==> kube-scheduler [fd5c9975146e3860ae36a41d2237bcaa8c50b64dc80a086dae5844e81374afe7] <==
	E1018 12:17:09.913812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:17:09.913850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:17:09.913673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:17:09.913687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:17:09.913895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:17:09.913905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:17:09.913988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:17:09.913992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:17:09.913715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:17:09.914115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:17:09.914115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:17:10.738744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:17:10.744960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:17:10.806546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:17:10.810045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:17:10.894881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:17:10.913737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:17:11.032559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:17:11.054023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:17:11.088313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:17:11.126017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:17:11.135484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:17:11.153897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:17:11.474422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 12:17:13.410713       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
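The burst of "forbidden" list errors is the normal scheduler startup race: it comes up before the apiserver has finished bootstrapping the system:kube-scheduler RBAC objects, and the errors stop once the extension-apiserver-authentication cache syncs at 12:17:13. Verifying that binding after the fact, assuming the same context:

	kubectl --context embed-certs-175371 get clusterrolebinding system:kube-scheduler -o wide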
	
	
	==> kubelet <==
	Oct 18 12:17:13 embed-certs-175371 kubelet[1297]: E1018 12:17:13.409583    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-175371\" already exists" pod="kube-system/kube-apiserver-embed-certs-175371"
	Oct 18 12:17:13 embed-certs-175371 kubelet[1297]: I1018 12:17:13.594425    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-175371" podStartSLOduration=1.5943995260000001 podStartE2EDuration="1.594399526s" podCreationTimestamp="2025-10-18 12:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:13.409334597 +0000 UTC m=+1.261730291" watchObservedRunningTime="2025-10-18 12:17:13.594399526 +0000 UTC m=+1.446795218"
	Oct 18 12:17:13 embed-certs-175371 kubelet[1297]: I1018 12:17:13.727064    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-175371" podStartSLOduration=2.727040202 podStartE2EDuration="2.727040202s" podCreationTimestamp="2025-10-18 12:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:13.594605751 +0000 UTC m=+1.447001438" watchObservedRunningTime="2025-10-18 12:17:13.727040202 +0000 UTC m=+1.579435896"
	Oct 18 12:17:13 embed-certs-175371 kubelet[1297]: I1018 12:17:13.871513    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-175371" podStartSLOduration=1.871488426 podStartE2EDuration="1.871488426s" podCreationTimestamp="2025-10-18 12:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:13.871406821 +0000 UTC m=+1.723802515" watchObservedRunningTime="2025-10-18 12:17:13.871488426 +0000 UTC m=+1.723884119"
	Oct 18 12:17:13 embed-certs-175371 kubelet[1297]: I1018 12:17:13.871614    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-175371" podStartSLOduration=1.871605242 podStartE2EDuration="1.871605242s" podCreationTimestamp="2025-10-18 12:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:13.727691947 +0000 UTC m=+1.580087642" watchObservedRunningTime="2025-10-18 12:17:13.871605242 +0000 UTC m=+1.724000936"
	Oct 18 12:17:16 embed-certs-175371 kubelet[1297]: I1018 12:17:16.991838    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 12:17:16 embed-certs-175371 kubelet[1297]: I1018 12:17:16.992565    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863163    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da-cni-cfg\") pod \"kindnet-dxw8r\" (UID: \"c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da\") " pod="kube-system/kindnet-dxw8r"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863216    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da-xtables-lock\") pod \"kindnet-dxw8r\" (UID: \"c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da\") " pod="kube-system/kindnet-dxw8r"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863252    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da-lib-modules\") pod \"kindnet-dxw8r\" (UID: \"c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da\") " pod="kube-system/kindnet-dxw8r"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863274    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d5ade84-59a3-4948-ba28-a6663bd749ab-xtables-lock\") pod \"kube-proxy-t2x4c\" (UID: \"9d5ade84-59a3-4948-ba28-a6663bd749ab\") " pod="kube-system/kube-proxy-t2x4c"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863323    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d5ade84-59a3-4948-ba28-a6663bd749ab-kube-proxy\") pod \"kube-proxy-t2x4c\" (UID: \"9d5ade84-59a3-4948-ba28-a6663bd749ab\") " pod="kube-system/kube-proxy-t2x4c"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863360    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d5ade84-59a3-4948-ba28-a6663bd749ab-lib-modules\") pod \"kube-proxy-t2x4c\" (UID: \"9d5ade84-59a3-4948-ba28-a6663bd749ab\") " pod="kube-system/kube-proxy-t2x4c"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863401    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhw9v\" (UniqueName: \"kubernetes.io/projected/c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da-kube-api-access-xhw9v\") pod \"kindnet-dxw8r\" (UID: \"c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da\") " pod="kube-system/kindnet-dxw8r"
	Oct 18 12:17:17 embed-certs-175371 kubelet[1297]: I1018 12:17:17.863437    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslfn\" (UniqueName: \"kubernetes.io/projected/9d5ade84-59a3-4948-ba28-a6663bd749ab-kube-api-access-jslfn\") pod \"kube-proxy-t2x4c\" (UID: \"9d5ade84-59a3-4948-ba28-a6663bd749ab\") " pod="kube-system/kube-proxy-t2x4c"
	Oct 18 12:17:18 embed-certs-175371 kubelet[1297]: I1018 12:17:18.302378    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2x4c" podStartSLOduration=1.302353951 podStartE2EDuration="1.302353951s" podCreationTimestamp="2025-10-18 12:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:18.302065297 +0000 UTC m=+6.154460991" watchObservedRunningTime="2025-10-18 12:17:18.302353951 +0000 UTC m=+6.154749646"
	Oct 18 12:17:21 embed-certs-175371 kubelet[1297]: I1018 12:17:21.311824    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dxw8r" podStartSLOduration=4.31180004 podStartE2EDuration="4.31180004s" podCreationTimestamp="2025-10-18 12:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:18.330862637 +0000 UTC m=+6.183258331" watchObservedRunningTime="2025-10-18 12:17:21.31180004 +0000 UTC m=+9.164195734"
	Oct 18 12:17:58 embed-certs-175371 kubelet[1297]: I1018 12:17:58.740517    1297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 12:17:58 embed-certs-175371 kubelet[1297]: I1018 12:17:58.874308    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d598f5a5-5d3e-4ad8-9266-ea4fee4648c7-tmp\") pod \"storage-provisioner\" (UID: \"d598f5a5-5d3e-4ad8-9266-ea4fee4648c7\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:58 embed-certs-175371 kubelet[1297]: I1018 12:17:58.874411    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf0c7f4f-476e-4faf-9159-580059735927-config-volume\") pod \"coredns-66bc5c9577-b6h9l\" (UID: \"bf0c7f4f-476e-4faf-9159-580059735927\") " pod="kube-system/coredns-66bc5c9577-b6h9l"
	Oct 18 12:17:58 embed-certs-175371 kubelet[1297]: I1018 12:17:58.874451    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjcn\" (UniqueName: \"kubernetes.io/projected/bf0c7f4f-476e-4faf-9159-580059735927-kube-api-access-ldjcn\") pod \"coredns-66bc5c9577-b6h9l\" (UID: \"bf0c7f4f-476e-4faf-9159-580059735927\") " pod="kube-system/coredns-66bc5c9577-b6h9l"
	Oct 18 12:17:58 embed-certs-175371 kubelet[1297]: I1018 12:17:58.874482    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpnb\" (UniqueName: \"kubernetes.io/projected/d598f5a5-5d3e-4ad8-9266-ea4fee4648c7-kube-api-access-4gpnb\") pod \"storage-provisioner\" (UID: \"d598f5a5-5d3e-4ad8-9266-ea4fee4648c7\") " pod="kube-system/storage-provisioner"
	Oct 18 12:17:59 embed-certs-175371 kubelet[1297]: I1018 12:17:59.392976    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b6h9l" podStartSLOduration=41.392953096 podStartE2EDuration="41.392953096s" podCreationTimestamp="2025-10-18 12:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:59.392601282 +0000 UTC m=+47.244996976" watchObservedRunningTime="2025-10-18 12:17:59.392953096 +0000 UTC m=+47.245348787"
	Oct 18 12:17:59 embed-certs-175371 kubelet[1297]: I1018 12:17:59.403376    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.403332959 podStartE2EDuration="41.403332959s" podCreationTimestamp="2025-10-18 12:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:17:59.403176513 +0000 UTC m=+47.255572206" watchObservedRunningTime="2025-10-18 12:17:59.403332959 +0000 UTC m=+47.255728654"
	Oct 18 12:18:01 embed-certs-175371 kubelet[1297]: I1018 12:18:01.593123    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgp2c\" (UniqueName: \"kubernetes.io/projected/d7e2785e-4860-4f2d-af78-a6a7770e8f29-kube-api-access-vgp2c\") pod \"busybox\" (UID: \"d7e2785e-4860-4f2d-af78-a6a7770e8f29\") " pod="default/busybox"
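The ~41s podStartSLOduration for coredns and storage-provisioner is not image-pull latency (firstStartedPulling is zeroed): both pods were created at 12:17:18 but could only run once the node went Ready at 12:17:58. Sorting pods by start time makes that gap visible (a convenience query, not part of the harness):

	kubectl --context embed-certs-175371 -n kube-system get pods --sort-by=.status.startTime -o wide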
	
	
	==> storage-provisioner [7f357bdc8f42e06795c6795f44e1e323c30adc829cb16516042fa0bf28e44120] <==
	I1018 12:17:59.132536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:17:59.142287       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:17:59.142353       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:17:59.145026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:59.157676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:17:59.157910       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:17:59.158163       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-175371_75ee1ff2-00ac-44e2-b74f-e0e008e7200d!
	I1018 12:17:59.159435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5075b3f2-7e93-4c37-98dd-c9faa2e4aa50", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-175371_75ee1ff2-00ac-44e2-b74f-e0e008e7200d became leader
	W1018 12:17:59.166684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:17:59.186435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:17:59.258891       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-175371_75ee1ff2-00ac-44e2-b74f-e0e008e7200d!
	W1018 12:18:01.190573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:01.199734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:03.203381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:03.208151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:05.212640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:05.230148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:07.233855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:07.238652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:09.242987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:09.248842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:11.252336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:11.263263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
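The repeated warnings come from the provisioner's leader-election client, which still renews its lease through the deprecated v1 Endpoints object (one get plus one update roughly every 2s, hence the pairs). The object it is writing can be inspected directly, assuming the same context:

	kubectl --context embed-certs-175371 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml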
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175371 -n embed-certs-175371
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-175371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.72s)

TestStartStop/group/no-preload/serial/Pause (7.41s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-406541 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-406541 --alsologtostderr -v=1: exit status 80 (2.631650265s)

-- stdout --
	* Pausing node no-preload-406541 ... 
	
	

-- /stdout --
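The stderr trace below shows why the pause fails: after disabling the kubelet, `minikube pause` enumerates running containers with `sudo runc list -f json`, which errors out because /run/runc does not exist on this node (the CRI-O runtime here evidently keeps its state elsewhere), so minikube keeps retrying with backoff until it gives up. The failing probe can be reproduced by hand (profile name from the log; a diagnostic sketch, not a fix):

	minikube -p no-preload-406541 ssh -- sudo runc list -f json
	minikube -p no-preload-406541 ssh -- ls -d /run/runc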
** stderr ** 
	I1018 12:18:43.572112  322157 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:43.572413  322157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:43.572423  322157 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:43.572430  322157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:43.572654  322157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:43.572959  322157 out.go:368] Setting JSON to false
	I1018 12:18:43.573001  322157 mustload.go:65] Loading cluster: no-preload-406541
	I1018 12:18:43.573373  322157 config.go:182] Loaded profile config "no-preload-406541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:43.573779  322157 cli_runner.go:164] Run: docker container inspect no-preload-406541 --format={{.State.Status}}
	I1018 12:18:43.593327  322157 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:18:43.593673  322157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:43.655360  322157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-18 12:18:43.646030139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:43.656006  322157 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-406541 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 12:18:43.658075  322157 out.go:179] * Pausing node no-preload-406541 ... 
	I1018 12:18:43.659321  322157 host.go:66] Checking if "no-preload-406541" exists ...
	I1018 12:18:43.659561  322157 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:43.659594  322157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-406541
	I1018 12:18:43.678155  322157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/no-preload-406541/id_rsa Username:docker}
	I1018 12:18:43.778676  322157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:43.793382  322157 pause.go:52] kubelet running: true
	I1018 12:18:43.793441  322157 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:43.982249  322157 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:43.982348  322157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:44.056879  322157 cri.go:89] found id: "62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076"
	I1018 12:18:44.056905  322157 cri.go:89] found id: "bf4962a6a3ad256176dfa5ae96b9a87a6ed571246e8433b9f043ab17f752c961"
	I1018 12:18:44.056910  322157 cri.go:89] found id: "40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b"
	I1018 12:18:44.056915  322157 cri.go:89] found id: "9b0a2248d2179ef0842e69ec0fb3d1c0118e01bfa03af00785477b05bbf28109"
	I1018 12:18:44.056920  322157 cri.go:89] found id: "eeb9a7b0a2689ceb5e5446d2d318c44949119ed381f76cb943c969ada5e7480d"
	I1018 12:18:44.056924  322157 cri.go:89] found id: "5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e"
	I1018 12:18:44.056929  322157 cri.go:89] found id: "133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3"
	I1018 12:18:44.056933  322157 cri.go:89] found id: "37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7"
	I1018 12:18:44.056937  322157 cri.go:89] found id: "786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13"
	I1018 12:18:44.056953  322157 cri.go:89] found id: "2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	I1018 12:18:44.056961  322157 cri.go:89] found id: "d8afd7c12527a3cd1abb0b05cf7514d555f1c3d34293776ee0abc22dfa7847ed"
	I1018 12:18:44.056965  322157 cri.go:89] found id: ""
	I1018 12:18:44.057019  322157 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:44.070020  322157 retry.go:31] will retry after 142.620557ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:44Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:44.213422  322157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:44.228164  322157 pause.go:52] kubelet running: false
	I1018 12:18:44.228245  322157 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:44.415693  322157 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:44.415804  322157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:44.492546  322157 cri.go:89] found id: "62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076"
	I1018 12:18:44.492572  322157 cri.go:89] found id: "bf4962a6a3ad256176dfa5ae96b9a87a6ed571246e8433b9f043ab17f752c961"
	I1018 12:18:44.492578  322157 cri.go:89] found id: "40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b"
	I1018 12:18:44.492583  322157 cri.go:89] found id: "9b0a2248d2179ef0842e69ec0fb3d1c0118e01bfa03af00785477b05bbf28109"
	I1018 12:18:44.492587  322157 cri.go:89] found id: "eeb9a7b0a2689ceb5e5446d2d318c44949119ed381f76cb943c969ada5e7480d"
	I1018 12:18:44.492591  322157 cri.go:89] found id: "5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e"
	I1018 12:18:44.492594  322157 cri.go:89] found id: "133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3"
	I1018 12:18:44.492598  322157 cri.go:89] found id: "37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7"
	I1018 12:18:44.492602  322157 cri.go:89] found id: "786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13"
	I1018 12:18:44.492620  322157 cri.go:89] found id: "2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	I1018 12:18:44.492629  322157 cri.go:89] found id: "d8afd7c12527a3cd1abb0b05cf7514d555f1c3d34293776ee0abc22dfa7847ed"
	I1018 12:18:44.492633  322157 cri.go:89] found id: ""
	I1018 12:18:44.492679  322157 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:44.505929  322157 retry.go:31] will retry after 374.484171ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:44Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:44.881581  322157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:44.894991  322157 pause.go:52] kubelet running: false
	I1018 12:18:44.895041  322157 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:45.059204  322157 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:45.059267  322157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:45.126212  322157 cri.go:89] found id: "62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076"
	I1018 12:18:45.126241  322157 cri.go:89] found id: "bf4962a6a3ad256176dfa5ae96b9a87a6ed571246e8433b9f043ab17f752c961"
	I1018 12:18:45.126247  322157 cri.go:89] found id: "40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b"
	I1018 12:18:45.126251  322157 cri.go:89] found id: "9b0a2248d2179ef0842e69ec0fb3d1c0118e01bfa03af00785477b05bbf28109"
	I1018 12:18:45.126256  322157 cri.go:89] found id: "eeb9a7b0a2689ceb5e5446d2d318c44949119ed381f76cb943c969ada5e7480d"
	I1018 12:18:45.126259  322157 cri.go:89] found id: "5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e"
	I1018 12:18:45.126263  322157 cri.go:89] found id: "133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3"
	I1018 12:18:45.126267  322157 cri.go:89] found id: "37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7"
	I1018 12:18:45.126271  322157 cri.go:89] found id: "786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13"
	I1018 12:18:45.126290  322157 cri.go:89] found id: "2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	I1018 12:18:45.126294  322157 cri.go:89] found id: "d8afd7c12527a3cd1abb0b05cf7514d555f1c3d34293776ee0abc22dfa7847ed"
	I1018 12:18:45.126298  322157 cri.go:89] found id: ""
	I1018 12:18:45.126345  322157 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:45.141200  322157 retry.go:31] will retry after 633.342096ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:45Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:45.774894  322157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:45.794113  322157 pause.go:52] kubelet running: false
	I1018 12:18:45.794177  322157 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:46.025113  322157 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:46.025320  322157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:46.123932  322157 cri.go:89] found id: "62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076"
	I1018 12:18:46.124036  322157 cri.go:89] found id: "bf4962a6a3ad256176dfa5ae96b9a87a6ed571246e8433b9f043ab17f752c961"
	I1018 12:18:46.124046  322157 cri.go:89] found id: "40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b"
	I1018 12:18:46.124051  322157 cri.go:89] found id: "9b0a2248d2179ef0842e69ec0fb3d1c0118e01bfa03af00785477b05bbf28109"
	I1018 12:18:46.124063  322157 cri.go:89] found id: "eeb9a7b0a2689ceb5e5446d2d318c44949119ed381f76cb943c969ada5e7480d"
	I1018 12:18:46.124068  322157 cri.go:89] found id: "5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e"
	I1018 12:18:46.124072  322157 cri.go:89] found id: "133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3"
	I1018 12:18:46.124076  322157 cri.go:89] found id: "37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7"
	I1018 12:18:46.124080  322157 cri.go:89] found id: "786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13"
	I1018 12:18:46.124101  322157 cri.go:89] found id: "2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	I1018 12:18:46.124105  322157 cri.go:89] found id: "d8afd7c12527a3cd1abb0b05cf7514d555f1c3d34293776ee0abc22dfa7847ed"
	I1018 12:18:46.124109  322157 cri.go:89] found id: ""
	I1018 12:18:46.124156  322157 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:46.143244  322157 out.go:203] 
	W1018 12:18:46.144689  322157 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:18:46.144714  322157 out.go:285] * 
	W1018 12:18:46.150973  322157 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:18:46.152475  322157 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-406541 --alsologtostderr -v=1 failed: exit status 80
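
The pause exits with GUEST_PAUSE because `sudo runc list -f json` keeps failing with "open /run/runc: no such file or directory" (presumably the crio runtime on this node keeps its runc state elsewhere), and minikube gives up after three retries with growing delays (~143ms, ~374ms, ~633ms in the log above). Below is a minimal Go sketch of that retry-with-backoff shape; the constants and structure are illustrative assumptions, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		backoff := 150 * time.Millisecond // first delay; the log above shows ~142ms
		for attempt := 1; attempt <= 4; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			// Mirrors the failure mode above: runc exits 1 when its state
			// directory (/run/runc) does not exist.
			fmt.Printf("attempt %d failed: %v; retrying after %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2 // grow the delay, roughly like the sequence in the log
		}
		fmt.Println("giving up: could not list running containers")
	}
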
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-406541
helpers_test.go:243: (dbg) docker inspect no-preload-406541:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193",
	        "Created": "2025-10-18T12:16:27.38049252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:17:46.056629542Z",
	            "FinishedAt": "2025-10-18T12:17:45.214384513Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/hostname",
	        "HostsPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/hosts",
	        "LogPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193-json.log",
	        "Name": "/no-preload-406541",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-406541:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-406541",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193",
	                "LowerDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/merged",
	                "UpperDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/diff",
	                "WorkDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-406541",
	                "Source": "/var/lib/docker/volumes/no-preload-406541/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-406541",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-406541",
	                "name.minikube.sigs.k8s.io": "no-preload-406541",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8544c1ba9b3b88dba7e7ac1dcca0a0c80468b3a84acde8b893cacbc7caaa8fc1",
	            "SandboxKey": "/var/run/docker/netns/8544c1ba9b3b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-406541": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:25:96:e9:d1:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dc7610ce545693ef1e28eeee1b4922dd1bc5e4eb71b054fa064c5359b8ecf50a",
	                    "EndpointID": "7befa15c15e950ac9859cbb42744c22233d614b6a32baae23b901de5aa3e1a8f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-406541",
	                        "3111cdfbd44a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
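
Note: the 22/tcp → 127.0.0.1:33113 mapping in this inspect dump is the same host port the ssh client connects to at the top of the pause log. As an illustrative aside (the container name is taken from the dump; this snippet is not part of the harness), that single field can be pulled out with docker's --format Go template instead of parsing the full JSON:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into NetworkSettings.Ports["22/tcp"][0].HostPort using
		// docker's built-in Go-template support.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, "no-preload-406541").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // "33113" for the container above
	}
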
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541: exit status 2 (417.232509ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
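
The harness tolerates this: `minikube status --format={{.Host}}` prints "Running" for the host container but exits non-zero because other components (here, the kubelet that the pause attempt disabled) are not in their expected state. A small sketch, assuming only the exit code needs inspecting, of how such a (dbg) run can separate the printed state from the exit status in Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "no-preload-406541")
		out, err := cmd.Output() // stdout only; "Running" in the run above
		fmt.Printf("host state: %s", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// A non-zero exit from `minikube status` signals that some
			// component is not in the expected state; the helper above
			// treats exit status 2 as "may be ok".
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}
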
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-406541 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-406541 logs -n 25: (1.374834821s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                                                                                               │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p old-k8s-version-024443 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:30.700052  319485 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:30.700328  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700338  319485 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:30.700342  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700573  319485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:30.701112  319485 out.go:368] Setting JSON to false
	I1018 12:18:30.702451  319485 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3659,"bootTime":1760786252,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:30.702547  319485 start.go:141] virtualization: kvm guest
	I1018 12:18:30.704614  319485 out.go:179] * [embed-certs-175371] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:30.706016  319485 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:30.706038  319485 notify.go:220] Checking for updates...
	I1018 12:18:30.708920  319485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:30.710890  319485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:30.712258  319485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:30.713409  319485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:30.714965  319485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:30.716835  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:30.717456  319485 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:30.741640  319485 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:30.741748  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.802733  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.790905861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:30.802866  319485 docker.go:318] overlay module found
	I1018 12:18:30.805106  319485 out.go:179] * Using the docker driver based on existing profile
	W1018 12:18:26.415356  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	W1018 12:18:28.908743  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	I1018 12:18:30.410244  310517 pod_ready.go:94] pod "coredns-66bc5c9577-bwvrq" is "Ready"
	I1018 12:18:30.410272  310517 pod_ready.go:86] duration metric: took 33.006670577s for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.413489  310517 pod_ready.go:83] waiting for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.418087  310517 pod_ready.go:94] pod "etcd-no-preload-406541" is "Ready"
	I1018 12:18:30.418113  310517 pod_ready.go:86] duration metric: took 4.60176ms for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.420752  310517 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.425914  310517 pod_ready.go:94] pod "kube-apiserver-no-preload-406541" is "Ready"
	I1018 12:18:30.425945  310517 pod_ready.go:86] duration metric: took 5.137183ms for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.430423  310517 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.608129  310517 pod_ready.go:94] pod "kube-controller-manager-no-preload-406541" is "Ready"
	I1018 12:18:30.608164  310517 pod_ready.go:86] duration metric: took 177.709701ms for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.807461  310517 pod_ready.go:83] waiting for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.806468  319485 start.go:305] selected driver: docker
	I1018 12:18:30.806488  319485 start.go:925] validating driver "docker" against &{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.806613  319485 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:30.807410  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.867893  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.856888749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:30.868200  319485 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:30.868236  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:30.868281  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:30.868319  319485 start.go:349] cluster config:
	{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.870215  319485 out.go:179] * Starting "embed-certs-175371" primary control-plane node in "embed-certs-175371" cluster
	I1018 12:18:30.871831  319485 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:30.873306  319485 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:30.874877  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:30.874928  319485 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:30.874944  319485 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:30.875010  319485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:30.875066  319485 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:30.875078  319485 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:30.875220  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:30.899840  319485 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:30.899862  319485 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:30.899879  319485 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:30.899905  319485 start.go:360] acquireMachinesLock for embed-certs-175371: {Name:mk656d4acd5501b1836b6cdb3453deba417e2657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:30.899958  319485 start.go:364] duration metric: took 36.728µs to acquireMachinesLock for "embed-certs-175371"
	I1018 12:18:30.899976  319485 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:18:30.899983  319485 fix.go:54] fixHost starting: 
	I1018 12:18:30.900188  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:30.918592  319485 fix.go:112] recreateIfNeeded on embed-certs-175371: state=Stopped err=<nil>
	W1018 12:18:30.918622  319485 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:18:31.208253  310517 pod_ready.go:94] pod "kube-proxy-9vbmr" is "Ready"
	I1018 12:18:31.208285  310517 pod_ready.go:86] duration metric: took 400.799145ms for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.407677  310517 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806754  310517 pod_ready.go:94] pod "kube-scheduler-no-preload-406541" is "Ready"
	I1018 12:18:31.806818  310517 pod_ready.go:86] duration metric: took 399.114489ms for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806829  310517 pod_ready.go:40] duration metric: took 34.407726613s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:31.854283  310517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:18:31.855987  310517 out.go:179] * Done! kubectl is now configured to use "no-preload-406541" cluster and "default" namespace by default
	W1018 12:18:29.376596  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	I1018 12:18:30.875552  309793 pod_ready.go:94] pod "coredns-5dd5756b68-s4wnq" is "Ready"
	I1018 12:18:30.875577  309793 pod_ready.go:86] duration metric: took 36.005408914s for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.878359  309793 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.883038  309793 pod_ready.go:94] pod "etcd-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.883061  309793 pod_ready.go:86] duration metric: took 4.681016ms for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.886183  309793 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.890240  309793 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.890262  309793 pod_ready.go:86] duration metric: took 4.059352ms for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.893534  309793 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.074647  309793 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-024443" is "Ready"
	I1018 12:18:31.074685  309793 pod_ready.go:86] duration metric: took 181.128894ms for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.274861  309793 pod_ready.go:83] waiting for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.674522  309793 pod_ready.go:94] pod "kube-proxy-tzlpd" is "Ready"
	I1018 12:18:31.674555  309793 pod_ready.go:86] duration metric: took 399.668633ms for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.874734  309793 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274153  309793 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-024443" is "Ready"
	I1018 12:18:32.274178  309793 pod_ready.go:86] duration metric: took 399.401101ms for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274188  309793 pod_ready.go:40] duration metric: took 37.409550626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:32.318706  309793 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 12:18:32.320699  309793 out.go:203] 
	W1018 12:18:32.322350  309793 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 12:18:32.323906  309793 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 12:18:32.325540  309793 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-024443" cluster and "default" namespace by default
	I1018 12:18:29.298582  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.303739  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:29.303786  317167 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
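The poll loop behind these lines is simple: hit /healthz every ~500ms until it returns 200 or a deadline passes. A minimal Go sketch of that loop, assuming the URL from the log and an insecure TLS config (the local apiserver's CA is not in the system trust store):

// healthz_poll.go — minimal sketch of the /healthz poll above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Local apiserver uses a self-signed CA; skip verification here only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.103.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms poll cadence in the log
	}
	fmt.Println("healthz never became ready")
}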
	I1018 12:18:29.797387  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.802331  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 12:18:29.803460  317167 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:29.803483  317167 api_server.go:131] duration metric: took 1.00630107s to wait for apiserver health ...
	I1018 12:18:29.803491  317167 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:29.807265  317167 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:29.807303  317167 system_pods.go:61] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.807319  317167 system_pods.go:61] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.807327  317167 system_pods.go:61] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.807333  317167 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.807341  317167 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.807349  317167 system_pods.go:61] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.807368  317167 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.807380  317167 system_pods.go:61] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.807389  317167 system_pods.go:74] duration metric: took 3.891153ms to wait for pod list to return data ...
	I1018 12:18:29.807401  317167 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:29.810242  317167 default_sa.go:45] found service account: "default"
	I1018 12:18:29.810296  317167 default_sa.go:55] duration metric: took 2.860617ms for default service account to be created ...
	I1018 12:18:29.810306  317167 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:29.813451  317167 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:29.813483  317167 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.813490  317167 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.813495  317167 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.813500  317167 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.813506  317167 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.813509  317167 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.813514  317167 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.813520  317167 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.813527  317167 system_pods.go:126] duration metric: took 3.216525ms to wait for k8s-apps to be running ...
	I1018 12:18:29.813536  317167 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:29.813576  317167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:29.827054  317167 system_svc.go:56] duration metric: took 13.51026ms WaitForService to wait for kubelet
	I1018 12:18:29.827080  317167 kubeadm.go:586] duration metric: took 3.447871394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:29.827097  317167 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:29.830363  317167 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:29.830389  317167 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:29.830401  317167 node_conditions.go:105] duration metric: took 3.29887ms to run NodePressure ...
	I1018 12:18:29.830412  317167 start.go:241] waiting for startup goroutines ...
	I1018 12:18:29.830418  317167 start.go:246] waiting for cluster config update ...
	I1018 12:18:29.830429  317167 start.go:255] writing updated cluster config ...
	I1018 12:18:29.830727  317167 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:29.835232  317167 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:29.839676  317167 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:31.844958  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:33.845498  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
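The pod_ready.go checks above ask whether the pod's PodReady condition is True. A sketch of the same check with client-go; the kubeconfig path is an assumption, the namespace and pod name come from the log:

// pod_ready.go — sketch of a "Ready" condition check via client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-66bc5c9577-7qgqj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A pod counts as "Ready" when its PodReady condition is True.
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
		}
	}
}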
	I1018 12:18:30.921314  319485 out.go:252] * Restarting existing docker container for "embed-certs-175371" ...
	I1018 12:18:30.921390  319485 cli_runner.go:164] Run: docker start embed-certs-175371
	I1018 12:18:31.169483  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:31.188693  319485 kic.go:430] container "embed-certs-175371" state is running.
	I1018 12:18:31.189103  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:31.209362  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:31.209641  319485 machine.go:93] provisionDockerMachine start ...
	I1018 12:18:31.209725  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:31.229147  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:31.229379  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:31.229390  319485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:18:31.229993  319485 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36872->127.0.0.1:33123: read: connection reset by peer
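This reset is expected right after "docker start": sshd inside the container is not up yet, so the provisioner retries the dial until it succeeds (as the next line shows). A sketch of that retry, assuming the port and key path from the log, using golang.org/x/crypto/ssh:

// ssh_retry.go — sketch of dialing SSH with retries until sshd is up.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not a production host
		Timeout:         5 * time.Second,
	}
	for i := 0; i < 20; i++ {
		client, err := ssh.Dial("tcp", "127.0.0.1:33123", cfg)
		if err == nil {
			defer client.Close()
			fmt.Println("ssh is up")
			return
		}
		fmt.Println("dial failed, retrying:", err) // e.g. "connection reset by peer" while sshd starts
		time.Sleep(3 * time.Second)
	}
	panic("sshd never came up")
}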
	I1018 12:18:34.383983  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.384015  319485 ubuntu.go:182] provisioning hostname "embed-certs-175371"
	I1018 12:18:34.384079  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.407484  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.407828  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.407850  319485 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175371 && echo "embed-certs-175371" | sudo tee /etc/hostname
	I1018 12:18:34.571542  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.571633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.593919  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.594233  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.594268  319485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175371/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:18:34.745131  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:18:34.745165  319485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:18:34.745187  319485 ubuntu.go:190] setting up certificates
	I1018 12:18:34.745200  319485 provision.go:84] configureAuth start
	I1018 12:18:34.745288  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:34.769316  319485 provision.go:143] copyHostCerts
	I1018 12:18:34.769395  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:18:34.769421  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:18:34.769499  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:18:34.769623  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:18:34.769630  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:18:34.769673  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:18:34.769842  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:18:34.769853  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:18:34.769895  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:18:34.769991  319485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175371 san=[127.0.0.1 192.168.76.2 embed-certs-175371 localhost minikube]
	I1018 12:18:35.347148  319485 provision.go:177] copyRemoteCerts
	I1018 12:18:35.347208  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:18:35.347243  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.368711  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:35.475696  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:18:35.507103  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:18:35.533969  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:18:35.562565  319485 provision.go:87] duration metric: took 817.346845ms to configureAuth
	I1018 12:18:35.562597  319485 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:18:35.562839  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:35.562989  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.590077  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:35.590320  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:35.590341  319485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:18:36.705988  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:18:36.706031  319485 machine.go:96] duration metric: took 5.49637009s to provisionDockerMachine
	I1018 12:18:36.706047  319485 start.go:293] postStartSetup for "embed-certs-175371" (driver="docker")
	I1018 12:18:36.706060  319485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:18:36.706128  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:18:36.706190  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.727476  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.830826  319485 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:18:36.835533  319485 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:18:36.835569  319485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:18:36.835584  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:18:36.835636  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:18:36.835707  319485 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:18:36.835829  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:18:36.846005  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:36.869811  319485 start.go:296] duration metric: took 163.746336ms for postStartSetup
	I1018 12:18:36.869902  319485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:18:36.869946  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.893357  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.997968  319485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:18:37.004253  319485 fix.go:56] duration metric: took 6.104260841s for fixHost
	I1018 12:18:37.004285  319485 start.go:83] releasing machines lock for "embed-certs-175371", held for 6.104316695s
	I1018 12:18:37.004355  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:37.029349  319485 ssh_runner.go:195] Run: cat /version.json
	I1018 12:18:37.029412  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.029566  319485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:18:37.029633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.054331  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.058158  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.158913  319485 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:37.235612  319485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:18:37.281675  319485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:18:37.287892  319485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:18:37.287969  319485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:18:37.298848  319485 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:18:37.298875  319485 start.go:495] detecting cgroup driver to use...
	I1018 12:18:37.298911  319485 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:18:37.298960  319485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:18:37.318507  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:18:37.335843  319485 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:18:37.335916  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:18:37.357159  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:18:37.373241  319485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:18:37.464197  319485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:18:37.557992  319485 docker.go:234] disabling docker service ...
	I1018 12:18:37.558064  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:18:37.573855  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:18:37.587606  319485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:18:37.677046  319485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:18:37.786485  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:18:37.800125  319485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:18:37.814639  319485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:18:37.814703  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.823696  319485 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:18:37.823802  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.833404  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.843440  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.852880  319485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:18:37.861252  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.870194  319485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.878686  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.887388  319485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:18:37.894731  319485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:18:37.902146  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:37.980625  319485 ssh_runner.go:195] Run: sudo systemctl restart crio
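The reconfiguration above is a chain of in-place sed substitutions on 02-crio.conf followed by a crio restart. A Go sketch of one such edit (the pause_image substitution), run locally rather than over SSH; it needs root on a crio host:

// crio_conf_edit.go — sketch of the pause_image sed edit above, in Go.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}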
	I1018 12:18:38.435447  319485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:18:38.435521  319485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:18:38.439678  319485 start.go:563] Will wait 60s for crictl version
	I1018 12:18:38.439734  319485 ssh_runner.go:195] Run: which crictl
	I1018 12:18:38.443262  319485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:18:38.467148  319485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:18:38.467213  319485 ssh_runner.go:195] Run: crio --version
	I1018 12:18:38.495216  319485 ssh_runner.go:195] Run: crio --version
	I1018 12:18:38.525571  319485 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 12:18:35.846564  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:38.345142  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:38.527068  319485 cli_runner.go:164] Run: docker network inspect embed-certs-175371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:38.546516  319485 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 12:18:38.550993  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:18:38.561695  319485 kubeadm.go:883] updating cluster {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:18:38.561845  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:38.561901  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.598535  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.598563  319485 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:18:38.598618  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.630421  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.630442  319485 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:18:38.630450  319485 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 12:18:38.630539  319485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-175371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:18:38.630598  319485 ssh_runner.go:195] Run: crio config
	I1018 12:18:38.679497  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:38.679521  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:38.679539  319485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:18:38.679558  319485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175371 NodeName:embed-certs-175371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:18:38.679684  319485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
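	The generated config above is three YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration) plus a KubeProxyConfiguration, separated by "---". A quick way to sanity-check that every document in such a file parses and carries apiVersion/kind is a small Go sketch using gopkg.in/yaml.v3; the local file name is an assumption:
	
	// kubeadm_yaml_check.go — sketch: parse a multi-document kubeadm config.
	package main
	
	import (
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("parsed %v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}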
	
	I1018 12:18:38.679753  319485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:18:38.689079  319485 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:18:38.689144  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:18:38.697752  319485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 12:18:38.712315  319485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:18:38.726955  319485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 12:18:38.742413  319485 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:18:38.747169  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:18:38.758198  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:38.854804  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:38.876145  319485 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371 for IP: 192.168.76.2
	I1018 12:18:38.876167  319485 certs.go:195] generating shared ca certs ...
	I1018 12:18:38.876187  319485 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:38.876358  319485 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:18:38.876406  319485 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:18:38.876416  319485 certs.go:257] generating profile certs ...
	I1018 12:18:38.876507  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/client.key
	I1018 12:18:38.876562  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key.760612f0
	I1018 12:18:38.876613  319485 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key
	I1018 12:18:38.876718  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:18:38.876744  319485 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:18:38.876751  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:18:38.876795  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:18:38.876824  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:18:38.876845  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:18:38.876882  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:38.877407  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:18:38.896628  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:18:38.916658  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:18:38.936639  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:18:38.960966  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 12:18:38.980170  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:18:38.997882  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:18:39.015725  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:18:39.032805  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:18:39.049790  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:18:39.068080  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:18:39.086062  319485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:18:39.098810  319485 ssh_runner.go:195] Run: openssl version
	I1018 12:18:39.105009  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:18:39.113777  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117712  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117797  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.153127  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:18:39.162168  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:18:39.171385  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175469  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175546  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.210362  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:18:39.218971  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:18:39.229154  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233188  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233248  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.268526  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
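The hash-and-symlink pattern above exists because OpenSSL looks up CA certificates by subject-hash filenames such as 3ec20f2e.0. A sketch of the same step in Go, with paths taken from the log; it shells out to openssl rather than reimplementing the subject-name hash:

// cert_hash_link.go — sketch of hashing a CA cert and linking it by hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/93602.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}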
	I1018 12:18:39.276871  319485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:18:39.280846  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:18:39.315107  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:18:39.350704  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:18:39.387775  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:18:39.435187  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:18:39.475299  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 12:18:39.529584  319485 kubeadm.go:400] StartCluster: {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:39.529660  319485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:18:39.529707  319485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:18:39.572206  319485 cri.go:89] found id: "7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd"
	I1018 12:18:39.572238  319485 cri.go:89] found id: "8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e"
	I1018 12:18:39.572245  319485 cri.go:89] found id: "d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40"
	I1018 12:18:39.572250  319485 cri.go:89] found id: "a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f"
	I1018 12:18:39.572255  319485 cri.go:89] found id: ""
	I1018 12:18:39.572310  319485 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:18:39.585733  319485 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:39Z" level=error msg="open /run/runc: no such file or directory"
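Note the failure is tolerated: kubeadm.go:407 logs a warning and carries on, since finding paused containers is best-effort during a restart. A sketch of that pattern (run a command, surface stderr on failure, continue rather than abort); the command is the one from the log:

// run_tolerant.go — sketch: best-effort command with non-fatal failure.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// Non-fatal: warn and fall through, as the restart path does.
		fmt.Printf("unpause skipped: %v\nstderr: %s\n", err, stderr.String())
		return
	}
	fmt.Println(stdout.String())
}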
	I1018 12:18:39.585815  319485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:18:39.594298  319485 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:18:39.594319  319485 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:18:39.594367  319485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:18:39.604664  319485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:18:39.605663  319485 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-175371" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.606304  319485 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-175371" cluster setting kubeconfig missing "embed-certs-175371" context setting]
	I1018 12:18:39.607392  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.609392  319485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:18:39.617900  319485 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 12:18:39.617934  319485 kubeadm.go:601] duration metric: took 23.608426ms to restartPrimaryControlPlane
	I1018 12:18:39.617944  319485 kubeadm.go:402] duration metric: took 88.372405ms to StartCluster
	I1018 12:18:39.617961  319485 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.618027  319485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.620424  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.620686  319485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:39.620787  319485 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:18:39.620892  319485 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175371"
	I1018 12:18:39.620905  319485 addons.go:69] Setting dashboard=true in profile "embed-certs-175371"
	I1018 12:18:39.620954  319485 addons.go:238] Setting addon dashboard=true in "embed-certs-175371"
	W1018 12:18:39.620966  319485 addons.go:247] addon dashboard should already be in state true
	I1018 12:18:39.621000  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.621038  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:39.620915  319485 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-175371"
	W1018 12:18:39.621060  319485 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:18:39.621089  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.620920  319485 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175371"
	I1018 12:18:39.621185  319485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175371"
	I1018 12:18:39.621523  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621548  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621562  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.623582  319485 out.go:179] * Verifying Kubernetes components...
	I1018 12:18:39.624890  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:39.647395  319485 addons.go:238] Setting addon default-storageclass=true in "embed-certs-175371"
	W1018 12:18:39.647416  319485 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:18:39.647444  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.647878  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.649378  319485 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:18:39.649377  319485 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:18:39.650859  319485 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.650877  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:18:39.650935  319485 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:18:39.650953  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.652294  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:18:39.652313  319485 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:18:39.652366  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.685481  319485 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.685508  319485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:18:39.685565  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.688909  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.691698  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.715793  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.776976  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:39.796702  319485 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:39.810215  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.810840  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:18:39.810861  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:18:39.827587  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:18:39.827617  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:18:39.832984  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.846934  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:18:39.846963  319485 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:18:39.866940  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:18:39.866963  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:18:39.884653  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:18:39.884676  319485 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:18:39.899737  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:18:39.899797  319485 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:18:39.914273  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:18:39.914304  319485 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:18:39.928891  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:18:39.928922  319485 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:18:39.941986  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:18:39.942011  319485 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:18:39.956234  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:18:41.376829  319485 node_ready.go:49] node "embed-certs-175371" is "Ready"
	I1018 12:18:41.376867  319485 node_ready.go:38] duration metric: took 1.579990475s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:41.376885  319485 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:18:41.376941  319485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:41.913233  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.102983393s)
	I1018 12:18:41.913329  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.08031124s)
	I1018 12:18:41.913460  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.957177067s)
	I1018 12:18:41.913484  319485 api_server.go:72] duration metric: took 2.292768638s to wait for apiserver process to appear ...
	I1018 12:18:41.913497  319485 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:18:41.913526  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:41.918402  319485 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-175371 addons enable metrics-server
	
	I1018 12:18:41.919631  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:41.919655  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:41.925471  319485 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 12:18:41.927054  319485 addons.go:514] duration metric: took 2.306294485s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1018 12:18:40.346078  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:42.347310  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:42.413938  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.418439  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:42.418474  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:42.913848  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.918735  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 12:18:42.919687  319485 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:42.919718  319485 api_server.go:131] duration metric: took 1.006210574s to wait for apiserver health ...
	I1018 12:18:42.919726  319485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:42.923301  319485 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:42.923341  319485 system_pods.go:61] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.923353  319485 system_pods.go:61] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.923364  319485 system_pods.go:61] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.923373  319485 system_pods.go:61] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.923383  319485 system_pods.go:61] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.923397  319485 system_pods.go:61] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.923409  319485 system_pods.go:61] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.923448  319485 system_pods.go:61] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.923466  319485 system_pods.go:74] duration metric: took 3.733114ms to wait for pod list to return data ...
	I1018 12:18:42.923476  319485 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:42.926029  319485 default_sa.go:45] found service account: "default"
	I1018 12:18:42.926061  319485 default_sa.go:55] duration metric: took 2.577664ms for default service account to be created ...
	I1018 12:18:42.926074  319485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:42.929022  319485 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:42.929049  319485 system_pods.go:89] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.929057  319485 system_pods.go:89] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.929063  319485 system_pods.go:89] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.929069  319485 system_pods.go:89] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.929074  319485 system_pods.go:89] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.929079  319485 system_pods.go:89] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.929084  319485 system_pods.go:89] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.929088  319485 system_pods.go:89] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.929095  319485 system_pods.go:126] duration metric: took 3.016302ms to wait for k8s-apps to be running ...
	I1018 12:18:42.929105  319485 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:42.929153  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:42.942149  319485 system_svc.go:56] duration metric: took 13.033259ms WaitForService to wait for kubelet
	I1018 12:18:42.942182  319485 kubeadm.go:586] duration metric: took 3.321467327s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:42.942204  319485 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:42.944896  319485 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:42.944917  319485 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:42.944942  319485 node_conditions.go:105] duration metric: took 2.731777ms to run NodePressure ...
	I1018 12:18:42.944955  319485 start.go:241] waiting for startup goroutines ...
	I1018 12:18:42.944969  319485 start.go:246] waiting for cluster config update ...
	I1018 12:18:42.945001  319485 start.go:255] writing updated cluster config ...
	I1018 12:18:42.945268  319485 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:42.949454  319485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:42.952932  319485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:44.959171  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
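
	The repeated 500s above are the apiserver's verbose /healthz output while the rbac and scheduling post-start hooks finish bootstrapping; the probe flips to 200 once they complete, as the later check shows. The same probe can be reproduced by hand; a minimal sketch, assuming the apiserver address from the log and the default anonymous access to /healthz (granted by the system:public-info-viewer binding in kubeadm-style clusters):

		# per-check detail, as shown in the log (-k skips verification of the
		# cluster's self-signed serving certificate)
		curl -k "https://192.168.76.2:8443/healthz?verbose"
		# equivalent, using the kubeconfig credentials instead
		kubectl get --raw '/healthz?verbose'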
	
	
	==> CRI-O <==
	Oct 18 12:18:07 no-preload-406541 crio[559]: time="2025-10-18T12:18:07.358712239Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 12:18:07 no-preload-406541 crio[559]: time="2025-10-18T12:18:07.36397423Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 12:18:07 no-preload-406541 crio[559]: time="2025-10-18T12:18:07.364006855Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.53207242Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1956d3cd-a69b-4b95-a51d-6b6c48006c81 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.534749202Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4c577758-3061-4e9f-8a7b-36600decb5ef name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.53714424Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper" id=954f2250-5f8f-46b1-bda0-edee95f398de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.539129126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.546073779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.546523222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.569881828Z" level=info msg="Created container 2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper" id=954f2250-5f8f-46b1-bda0-edee95f398de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.570658838Z" level=info msg="Starting container: 2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec" id=5be04e4d-fb9b-4b0c-bffc-ddd25ae2de52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.57276999Z" level=info msg="Started container" PID=1721 containerID=2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper id=5be04e4d-fb9b-4b0c-bffc-ddd25ae2de52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3fd81679ea24313fceafc8d30b3cadcde2f77045a11cb34bd98a251f5b1dd9ab
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.637091448Z" level=info msg="Removing container: 40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030" id=88c22a34-453e-4630-a434-8fc2b950234c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.649441238Z" level=info msg="Removed container 40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper" id=88c22a34-453e-4630-a434-8fc2b950234c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.66749735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cf8f7179-6d9b-4d1c-94e4-d855eac9d7ea name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.668475288Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=68057a0f-53ed-4d27-9d98-2f6d02d18abb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.669508611Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=44a78e76-b11f-42c2-b3a4-c69cc3dfc3ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.669825725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.674786763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.674988539Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a088c686830c0cb6a2e001facf5dc5fc70db4b47a1bbd5f1a8cb13100c8ba1aa/merged/etc/passwd: no such file or directory"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.675167707Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a088c686830c0cb6a2e001facf5dc5fc70db4b47a1bbd5f1a8cb13100c8ba1aa/merged/etc/group: no such file or directory"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.675500543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.704726386Z" level=info msg="Created container 62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076: kube-system/storage-provisioner/storage-provisioner" id=44a78e76-b11f-42c2-b3a4-c69cc3dfc3ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.705435219Z" level=info msg="Starting container: 62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076" id=5c5b03b4-b46d-4e8b-af7f-161ca2137ea2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.707369246Z" level=info msg="Started container" PID=1735 containerID=62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076 description=kube-system/storage-provisioner/storage-provisioner id=5c5b03b4-b46d-4e8b-af7f-161ca2137ea2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=077f82c17428529e98ecd94f00ba0ade8eb40352ad1722a71e470aebfe5b3482
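
	The "Failed to open /etc/passwd" and "/etc/group" warnings appear when an image ships without those files; they are warning-level only, and the subsequent "Started container" line shows the container starts regardless. To follow the runtime log directly, a sketch assuming the profile name from this log and the systemd-managed crio unit in minikube's node image:

		# tail CRI-O's journal from inside the minikube node
		minikube -p no-preload-406541 ssh -- sudo journalctl -u crio --no-pager -n 50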
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	62d512662ad1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   077f82c174285       storage-provisioner                          kube-system
	2f228a1149943       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   3fd81679ea243       dashboard-metrics-scraper-6ffb444bf9-q6bfd   kubernetes-dashboard
	d8afd7c12527a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   60739b9f5674a       kubernetes-dashboard-855c9754f9-v6qwc        kubernetes-dashboard
	bf4962a6a3ad2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   6e80cd756af60       coredns-66bc5c9577-bwvrq                     kube-system
	7343005218c69       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   f418e4a9de4e1       busybox                                      default
	40786b0420f7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   077f82c174285       storage-provisioner                          kube-system
	9b0a2248d2179       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   cc78454a95463       kube-proxy-9vbmr                             kube-system
	eeb9a7b0a2689       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   a6a81b438806d       kindnet-dwg7c                                kube-system
	5d618e751f9ba       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   bb80e4919842a       kube-controller-manager-no-preload-406541    kube-system
	133fd0664569c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   65379f445ed6e       kube-apiserver-no-preload-406541             kube-system
	37d2f600fcf0c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   c4161cb2bfae2       etcd-no-preload-406541                       kube-system
	786f9a8bc0ec9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   4f3e6836f52b4       kube-scheduler-no-preload-406541             kube-system
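
	The table above is CRI-level container state and can be reproduced inside the node with crictl; a sketch, assuming the same profile name (the truncated IDs from the table are generally accepted as prefixes):

		minikube -p no-preload-406541 ssh
		# list all containers, including exited ones
		sudo crictl ps -a
		# dump runtime metadata for the exited metrics-scraper container
		sudo crictl inspect 2f228a1149943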
	
	
	==> coredns [bf4962a6a3ad256176dfa5ae96b9a87a6ed571246e8433b9f043ab17f752c961] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45175 - 55704 "HINFO IN 3551838433391856392.3047988239489226815. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.431724226s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
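
	The "dial tcp 10.96.0.1:443: i/o timeout" errors mean CoreDNS could not reach the apiserver's service VIP while the node was restarting; they typically stop once kube-proxy has reprogrammed the service rules. A quick end-to-end DNS check, as a sketch using a throwaway busybox pod (the image tag is an assumption; any recent busybox works):

		kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- \
		  nslookup kubernetes.default.svc.cluster.local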
	
	
	==> describe nodes <==
	Name:               no-preload-406541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-406541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=no-preload-406541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-406541
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:17:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-406541
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                3289e84c-c9b3-408a-9f62-dbb3085e7d17
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-bwvrq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-406541                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-dwg7c                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-406541              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-406541     200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-9vbmr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-406541              100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q6bfd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v6qwc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node no-preload-406541 event: Registered Node no-preload-406541 in Controller
	  Normal  NodeReady                91s                  kubelet          Node no-preload-406541 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node no-preload-406541 event: Registered Node no-preload-406541 in Controller
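
	This block is standard `kubectl describe node` output; to regenerate it against the same cluster, assuming minikube's default kubeconfig context named after the profile:

		kubectl --context no-preload-406541 describe node no-preload-406541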
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
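
	The "martian source" messages are the kernel flagging packets whose source address fails reverse-path filtering, which is common and usually harmless when pod traffic (10.244.0.0/16 here) crosses nested Docker bridges on a CI host. The relevant host settings can be checked with:

		sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians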
	
	
	==> etcd [37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7] <==
	{"level":"warn","ts":"2025-10-18T12:17:55.219703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.228681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.236569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.243438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.250504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.257868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.265089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.272619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.278977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.285454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.292087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.299242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.306992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.313615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.320879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.328033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.335802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.343238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.351344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.358091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.371012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.375238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.382430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.389897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.438223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34104","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:18:47 up  1:01,  0 user,  load average: 3.75, 4.04, 2.60
	Linux no-preload-406541 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eeb9a7b0a2689ceb5e5446d2d318c44949119ed381f76cb943c969ada5e7480d] <==
	I1018 12:17:57.080243       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:17:57.139636       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 12:17:57.139884       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:17:57.139907       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:17:57.139931       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:17:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:17:57.343731       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:17:57.344385       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:17:57.344427       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:17:57.344538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:17:57.645288       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:57.645317       1 metrics.go:72] Registering metrics
	I1018 12:17:57.645414       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:07.343849       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:07.343932       1 main.go:301] handling current node
	I1018 12:18:17.349839       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:17.349877       1 main.go:301] handling current node
	I1018 12:18:27.344211       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:27.344246       1 main.go:301] handling current node
	I1018 12:18:37.349849       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:37.349891       1 main.go:301] handling current node
	I1018 12:18:47.352954       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:47.353006       1 main.go:301] handling current node
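
	The "nri plugin exited" line only means the runtime exposes no NRI socket; the caches still sync and per-node handling proceeds on its usual 10-second loop, as the following lines show. The nftables rules kindnet syncs can be listed from the node, assuming the nft CLI is present in the node image:

		minikube -p no-preload-406541 ssh -- sudo nft list tables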
	
	
	==> kube-apiserver [133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3] <==
	I1018 12:17:55.898403       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 12:17:55.898417       1 policy_source.go:240] refreshing policies
	I1018 12:17:55.898493       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:17:55.898501       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:17:55.898507       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:17:55.898513       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:17:55.898541       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:17:55.898680       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:17:55.898719       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:17:55.898714       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 12:17:55.907349       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 12:17:55.908799       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 12:17:55.919518       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:17:55.922140       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 12:17:56.154775       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:17:56.184152       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:17:56.208208       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:17:56.215214       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:17:56.223273       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:17:56.255684       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.39.19"}
	I1018 12:17:56.266307       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.67.249"}
	I1018 12:17:56.802301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:17:59.642296       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:17:59.692357       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:17:59.791610       1 controller.go:667] quota admission added evaluator for: endpoints
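
	The "quota admission added evaluator" lines are routine: the quota plugin registers an evaluator the first time each resource type is created after startup. Overall apiserver readiness, with per-check detail mirroring the healthz output earlier, can be pulled with:

		kubectl get --raw '/readyz?verbose'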
	
	
	==> kube-controller-manager [5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e] <==
	I1018 12:17:59.199598       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 12:17:59.211022       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:17:59.213295       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 12:17:59.237619       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:17:59.237648       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:17:59.237627       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:17:59.237803       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:17:59.237839       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:17:59.239088       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:17:59.239132       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:17:59.239148       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:17:59.239186       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:17:59.239198       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:17:59.239302       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:17:59.239205       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:17:59.245457       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:17:59.246660       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:59.247803       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:17:59.251063       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:17:59.255383       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:17:59.272583       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:59.280966       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:17:59.280991       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:17:59.281006       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:17:59.281218       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9b0a2248d2179ef0842e69ec0fb3d1c0118e01bfa03af00785477b05bbf28109] <==
	I1018 12:17:56.930009       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:17:56.983092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:17:57.083986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:17:57.084013       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 12:17:57.084110       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:17:57.103278       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:17:57.103344       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:17:57.108775       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:17:57.109181       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:17:57.109199       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:57.110639       1 config.go:200] "Starting service config controller"
	I1018 12:17:57.110660       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:17:57.110817       1 config.go:309] "Starting node config controller"
	I1018 12:17:57.110837       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:17:57.110893       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:17:57.110908       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:17:57.110941       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:17:57.110946       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:17:57.210827       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:17:57.211910       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:17:57.211925       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:17:57.211964       1 shared_informer.go:356] "Caches are synced" controller="node config"
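
	The nodePortAddresses warning is advisory, not an error: with the field unset, NodePorts accept connections on every local IP. The service rules the iptables proxier programmed can be inspected from the node:

		minikube -p no-preload-406541 ssh -- sudo iptables -t nat -S KUBE-SERVICES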
	
	
	==> kube-scheduler [786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13] <==
	I1018 12:17:54.495951       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:17:55.832513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:17:55.832679       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:17:55.832739       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:17:55.832968       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:17:55.866687       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:17:55.866720       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:55.869481       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:55.869528       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:55.869824       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:17:55.869912       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:17:55.970627       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
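	The requestheader_controller warning above carries its own fix template. Filled in for this scheduler it might look like the sketch below; the binding name is arbitrary, and --user stands in for the template's --serviceaccount because the forbidden error names the user system:kube-scheduler rather than a ServiceAccount. In practice this warning is often transient while RBAC is still coming up after a restart:
	  kubectl -n kube-system create rolebinding scheduler-authn-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler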
	
	
	==> kubelet <==
	Oct 18 12:17:59 no-preload-406541 kubelet[699]: I1018 12:17:59.849387     699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wp88\" (UniqueName: \"kubernetes.io/projected/8332edef-a3c6-4f80-a2dd-eacb94b7a43b-kube-api-access-2wp88\") pod \"dashboard-metrics-scraper-6ffb444bf9-q6bfd\" (UID: \"8332edef-a3c6-4f80-a2dd-eacb94b7a43b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd"
	Oct 18 12:18:00 no-preload-406541 kubelet[699]: I1018 12:18:00.294566     699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 12:18:02 no-preload-406541 kubelet[699]: I1018 12:18:02.595693     699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podStartSLOduration=1.184150265 podStartE2EDuration="3.595668036s" podCreationTimestamp="2025-10-18 12:17:59 +0000 UTC" firstStartedPulling="2025-10-18 12:18:00.09795434 +0000 UTC m=+6.677626813" lastFinishedPulling="2025-10-18 12:18:02.509472038 +0000 UTC m=+9.089144584" observedRunningTime="2025-10-18 12:18:02.595478007 +0000 UTC m=+9.175150486" watchObservedRunningTime="2025-10-18 12:18:02.595668036 +0000 UTC m=+9.175340515"
	Oct 18 12:18:03 no-preload-406541 kubelet[699]: I1018 12:18:03.588061     699 scope.go:117] "RemoveContainer" containerID="c289f37a70c40c4cd56f631f49a6bf157b473ceafeba46a5e311ef1bd7f41d5a"
	Oct 18 12:18:04 no-preload-406541 kubelet[699]: I1018 12:18:04.592851     699 scope.go:117] "RemoveContainer" containerID="c289f37a70c40c4cd56f631f49a6bf157b473ceafeba46a5e311ef1bd7f41d5a"
	Oct 18 12:18:04 no-preload-406541 kubelet[699]: I1018 12:18:04.593003     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:04 no-preload-406541 kubelet[699]: E1018 12:18:04.593217     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:05 no-preload-406541 kubelet[699]: I1018 12:18:05.594704     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:05 no-preload-406541 kubelet[699]: E1018 12:18:05.594928     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:06 no-preload-406541 kubelet[699]: I1018 12:18:06.602341     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:06 no-preload-406541 kubelet[699]: E1018 12:18:06.603248     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:06 no-preload-406541 kubelet[699]: I1018 12:18:06.958078     699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6qwc" podStartSLOduration=2.335113866 podStartE2EDuration="7.958051103s" podCreationTimestamp="2025-10-18 12:17:59 +0000 UTC" firstStartedPulling="2025-10-18 12:18:00.098435935 +0000 UTC m=+6.678108412" lastFinishedPulling="2025-10-18 12:18:05.721373177 +0000 UTC m=+12.301045649" observedRunningTime="2025-10-18 12:18:06.619475972 +0000 UTC m=+13.199148451" watchObservedRunningTime="2025-10-18 12:18:06.958051103 +0000 UTC m=+13.537723596"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: I1018 12:18:17.531588     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: I1018 12:18:17.635799     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: I1018 12:18:17.636001     699 scope.go:117] "RemoveContainer" containerID="2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: E1018 12:18:17.636270     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:26 no-preload-406541 kubelet[699]: I1018 12:18:26.143446     699 scope.go:117] "RemoveContainer" containerID="2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	Oct 18 12:18:26 no-preload-406541 kubelet[699]: E1018 12:18:26.143669     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:27 no-preload-406541 kubelet[699]: I1018 12:18:27.667029     699 scope.go:117] "RemoveContainer" containerID="40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b"
	Oct 18 12:18:37 no-preload-406541 kubelet[699]: I1018 12:18:37.531542     699 scope.go:117] "RemoveContainer" containerID="2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	Oct 18 12:18:37 no-preload-406541 kubelet[699]: E1018 12:18:37.531819     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:43 no-preload-406541 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:18:43 no-preload-406541 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:18:43 no-preload-406541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:18:43 no-preload-406541 systemd[1]: kubelet.service: Consumed 1.714s CPU time.
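	dashboard-metrics-scraper is crash-looping with an escalating back-off (10s, then 20s). The kubelet log only records the restarts; to see why the container exits, one would inspect the previous attempt, e.g.:
	  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-q6bfd --previous
	  kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-q6bfd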
	
	
	==> kubernetes-dashboard [d8afd7c12527a3cd1abb0b05cf7514d555f1c3d34293776ee0abc22dfa7847ed] <==
	2025/10/18 12:18:05 Starting overwatch
	2025/10/18 12:18:05 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:05 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:05 Using secret token for csrf signing
	2025/10/18 12:18:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:18:05 Generating JWE encryption key
	2025/10/18 12:18:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:05 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:05 Creating in-cluster Sidecar client
	2025/10/18 12:18:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:05 Serving insecurely on HTTP port: 9090
	2025/10/18 12:18:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
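	Both health-check failures point at the dashboard-metrics-scraper Service, consistent with the scraper pod crash-looping in the kubelet log above. A quick check of whether that Service has ready endpoints:
	  kubectl -n kubernetes-dashboard get svc dashboard-metrics-scraper
	  kubectl -n kubernetes-dashboard get endpointslices -l kubernetes.io/service-name=dashboard-metrics-scraper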
	
	
	==> storage-provisioner [40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b] <==
	I1018 12:17:56.896574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:18:26.900125       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
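	The fatal i/o timeout means this provisioner instance never reached the apiserver service VIP (10.96.0.1:443) within its 30s budget; the replacement instance below succeeds once the un-paused cluster settles. A throwaway pod can confirm the VIP is reachable from inside the cluster (pod name and image here are illustrative):
	  kubectl run svc-probe --rm -it --restart=Never --image=curlimages/curl -- \
	    curl -sk https://10.96.0.1/version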
	
	
	==> storage-provisioner [62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076] <==
	I1018 12:18:27.726361       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:18:27.735271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:18:27.735322       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:18:27.737967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:31.193613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:35.454668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:39.053245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:42.106616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:45.129826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:45.134922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:18:45.135088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:18:45.135234       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-406541_df1d8eaf-12f1-41c4-b2dd-ddeb45a44384!
	I1018 12:18:45.135273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf0d3988-5bf7-437b-a187-0fa2d27fb75f", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-406541_df1d8eaf-12f1-41c4-b2dd-ddeb45a44384 became leader
	W1018 12:18:45.138952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:45.143956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:18:45.235730       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-406541_df1d8eaf-12f1-41c4-b2dd-ddeb45a44384!
	W1018 12:18:47.148318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:47.153594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
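	The repeated warnings come from the provisioner's leader election, which still records its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the lease-acquisition events above); Kubernetes deprecates that API in favor of discovery.k8s.io EndpointSlice, and newer client-go leader election typically takes its lock on a coordination.k8s.io Lease instead. To inspect the lock object this build is actually using:
	  kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml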
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-406541 -n no-preload-406541
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-406541 -n no-preload-406541: exit status 2 (391.473359ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-406541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-406541
helpers_test.go:243: (dbg) docker inspect no-preload-406541:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193",
	        "Created": "2025-10-18T12:16:27.38049252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:17:46.056629542Z",
	            "FinishedAt": "2025-10-18T12:17:45.214384513Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/hostname",
	        "HostsPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/hosts",
	        "LogPath": "/var/lib/docker/containers/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193/3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193-json.log",
	        "Name": "/no-preload-406541",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-406541:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-406541",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3111cdfbd44a4ec5eed421693c13e289c9773d92e605e25d73a87d987a6e7193",
	                "LowerDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/merged",
	                "UpperDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/diff",
	                "WorkDir": "/var/lib/docker/overlay2/452b7a0353cc5fb49e7b2dc67c3eec0928606c730e569bf04fd69beda34a8483/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-406541",
	                "Source": "/var/lib/docker/volumes/no-preload-406541/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-406541",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-406541",
	                "name.minikube.sigs.k8s.io": "no-preload-406541",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8544c1ba9b3b88dba7e7ac1dcca0a0c80468b3a84acde8b893cacbc7caaa8fc1",
	            "SandboxKey": "/var/run/docker/netns/8544c1ba9b3b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-406541": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:25:96:e9:d1:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dc7610ce545693ef1e28eeee1b4922dd1bc5e4eb71b054fa064c5359b8ecf50a",
	                    "EndpointID": "7befa15c15e950ac9859cbb42744c22233d614b6a32baae23b901de5aa3e1a8f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-406541",
	                        "3111cdfbd44a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
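Most of the inspect dump above is noise for this failure; the relevant fields are the container state (Running, not Paused, despite the pause command) and the published ports. docker inspect can extract just those, e.g.:
  docker inspect no-preload-406541 --format '{{.State.Status}} paused={{.State.Paused}}'
  docker inspect no-preload-406541 --format '{{json .NetworkSettings.Ports}}'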
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541: exit status 2 (387.627553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-406541 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-406541 logs -n 25: (1.469198173s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                                                                                               │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p old-k8s-version-024443 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:30.700052  319485 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:30.700328  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700338  319485 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:30.700342  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700573  319485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:30.701112  319485 out.go:368] Setting JSON to false
	I1018 12:18:30.702451  319485 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3659,"bootTime":1760786252,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:30.702547  319485 start.go:141] virtualization: kvm guest
	I1018 12:18:30.704614  319485 out.go:179] * [embed-certs-175371] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:30.706016  319485 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:30.706038  319485 notify.go:220] Checking for updates...
	I1018 12:18:30.708920  319485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:30.710890  319485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:30.712258  319485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:30.713409  319485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:30.714965  319485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:30.716835  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:30.717456  319485 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:30.741640  319485 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:30.741748  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.802733  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.790905861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:30.802866  319485 docker.go:318] overlay module found
	I1018 12:18:30.805106  319485 out.go:179] * Using the docker driver based on existing profile
	W1018 12:18:26.415356  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	W1018 12:18:28.908743  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	I1018 12:18:30.410244  310517 pod_ready.go:94] pod "coredns-66bc5c9577-bwvrq" is "Ready"
	I1018 12:18:30.410272  310517 pod_ready.go:86] duration metric: took 33.006670577s for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.413489  310517 pod_ready.go:83] waiting for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.418087  310517 pod_ready.go:94] pod "etcd-no-preload-406541" is "Ready"
	I1018 12:18:30.418113  310517 pod_ready.go:86] duration metric: took 4.60176ms for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.420752  310517 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.425914  310517 pod_ready.go:94] pod "kube-apiserver-no-preload-406541" is "Ready"
	I1018 12:18:30.425945  310517 pod_ready.go:86] duration metric: took 5.137183ms for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.430423  310517 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.608129  310517 pod_ready.go:94] pod "kube-controller-manager-no-preload-406541" is "Ready"
	I1018 12:18:30.608164  310517 pod_ready.go:86] duration metric: took 177.709701ms for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.807461  310517 pod_ready.go:83] waiting for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.806468  319485 start.go:305] selected driver: docker
	I1018 12:18:30.806488  319485 start.go:925] validating driver "docker" against &{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.806613  319485 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:30.807410  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.867893  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.856888749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:30.868200  319485 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:30.868236  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:30.868281  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:30.868319  319485 start.go:349] cluster config:
	{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.870215  319485 out.go:179] * Starting "embed-certs-175371" primary control-plane node in "embed-certs-175371" cluster
	I1018 12:18:30.871831  319485 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:30.873306  319485 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:30.874877  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:30.874928  319485 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:30.874944  319485 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:30.875010  319485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:30.875066  319485 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:30.875078  319485 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:30.875220  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:30.899840  319485 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:30.899862  319485 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:30.899879  319485 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:30.899905  319485 start.go:360] acquireMachinesLock for embed-certs-175371: {Name:mk656d4acd5501b1836b6cdb3453deba417e2657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:30.899958  319485 start.go:364] duration metric: took 36.728µs to acquireMachinesLock for "embed-certs-175371"
	I1018 12:18:30.899976  319485 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:18:30.899983  319485 fix.go:54] fixHost starting: 
	I1018 12:18:30.900188  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:30.918592  319485 fix.go:112] recreateIfNeeded on embed-certs-175371: state=Stopped err=<nil>
	W1018 12:18:30.918622  319485 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:18:31.208253  310517 pod_ready.go:94] pod "kube-proxy-9vbmr" is "Ready"
	I1018 12:18:31.208285  310517 pod_ready.go:86] duration metric: took 400.799145ms for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.407677  310517 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806754  310517 pod_ready.go:94] pod "kube-scheduler-no-preload-406541" is "Ready"
	I1018 12:18:31.806818  310517 pod_ready.go:86] duration metric: took 399.114489ms for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806829  310517 pod_ready.go:40] duration metric: took 34.407726613s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:31.854283  310517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:18:31.855987  310517 out.go:179] * Done! kubectl is now configured to use "no-preload-406541" cluster and "default" namespace by default
	W1018 12:18:29.376596  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	I1018 12:18:30.875552  309793 pod_ready.go:94] pod "coredns-5dd5756b68-s4wnq" is "Ready"
	I1018 12:18:30.875577  309793 pod_ready.go:86] duration metric: took 36.005408914s for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.878359  309793 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.883038  309793 pod_ready.go:94] pod "etcd-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.883061  309793 pod_ready.go:86] duration metric: took 4.681016ms for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.886183  309793 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.890240  309793 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.890262  309793 pod_ready.go:86] duration metric: took 4.059352ms for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.893534  309793 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.074647  309793 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-024443" is "Ready"
	I1018 12:18:31.074685  309793 pod_ready.go:86] duration metric: took 181.128894ms for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.274861  309793 pod_ready.go:83] waiting for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.674522  309793 pod_ready.go:94] pod "kube-proxy-tzlpd" is "Ready"
	I1018 12:18:31.674555  309793 pod_ready.go:86] duration metric: took 399.668633ms for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.874734  309793 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274153  309793 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-024443" is "Ready"
	I1018 12:18:32.274178  309793 pod_ready.go:86] duration metric: took 399.401101ms for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274188  309793 pod_ready.go:40] duration metric: took 37.409550626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:32.318706  309793 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 12:18:32.320699  309793 out.go:203] 
	W1018 12:18:32.322350  309793 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 12:18:32.323906  309793 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 12:18:32.325540  309793 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-024443" cluster and "default" namespace by default
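
The version-skew warning printed for old-k8s-version above (and the silent "minor skew: 0" for no-preload earlier) is a mechanical comparison of the minor versions of the host kubectl and the cluster; kubectl's support policy only covers one minor version of skew in either direction, so 1.34 against 1.28 earns the warning. A small sketch of the comparison, with the version strings taken from the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorOf extracts the minor component from a "major.minor.patch" string.
    func minorOf(v string) int {
        m, _ := strconv.Atoi(strings.Split(v, ".")[1])
        return m
    }

    func main() {
        kubectl, cluster := "1.34.1", "1.28.0"
        skew := minorOf(kubectl) - minorOf(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
        if skew > 1 {
            fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", kubectl, cluster)
        }
    }
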
	I1018 12:18:29.298582  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.303739  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:29.303786  317167 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[healthz body identical to the 500 response above]
	I1018 12:18:29.797387  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.802331  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 12:18:29.803460  317167 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:29.803483  317167 api_server.go:131] duration metric: took 1.00630107s to wait for apiserver health ...
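
The healthz probe above is just an HTTPS GET against the apiserver, retried on an interval until it answers 200; a 500 with a single [-] entry (rbac/bootstrap-roles here) means one post-start hook had not yet finished. A minimal sketch of the retry loop, skipping TLS verification for brevity where a real client should trust the cluster CA, with the endpoint taken from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // For brevity only; a real client should pin the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.103.2:8444/healthz"
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return
                }
                fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
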
	I1018 12:18:29.803491  317167 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:29.807265  317167 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:29.807303  317167 system_pods.go:61] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.807319  317167 system_pods.go:61] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.807327  317167 system_pods.go:61] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.807333  317167 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.807341  317167 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.807349  317167 system_pods.go:61] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.807368  317167 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.807380  317167 system_pods.go:61] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.807389  317167 system_pods.go:74] duration metric: took 3.891153ms to wait for pod list to return data ...
	I1018 12:18:29.807401  317167 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:29.810242  317167 default_sa.go:45] found service account: "default"
	I1018 12:18:29.810296  317167 default_sa.go:55] duration metric: took 2.860617ms for default service account to be created ...
	I1018 12:18:29.810306  317167 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:29.813451  317167 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:29.813483  317167 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.813490  317167 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.813495  317167 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.813500  317167 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.813506  317167 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.813509  317167 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.813514  317167 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.813520  317167 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.813527  317167 system_pods.go:126] duration metric: took 3.216525ms to wait for k8s-apps to be running ...
	I1018 12:18:29.813536  317167 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:29.813576  317167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:29.827054  317167 system_svc.go:56] duration metric: took 13.51026ms WaitForService to wait for kubelet
	I1018 12:18:29.827080  317167 kubeadm.go:586] duration metric: took 3.447871394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:29.827097  317167 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:29.830363  317167 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:29.830389  317167 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:29.830401  317167 node_conditions.go:105] duration metric: took 3.29887ms to run NodePressure ...
	I1018 12:18:29.830412  317167 start.go:241] waiting for startup goroutines ...
	I1018 12:18:29.830418  317167 start.go:246] waiting for cluster config update ...
	I1018 12:18:29.830429  317167 start.go:255] writing updated cluster config ...
	I1018 12:18:29.830727  317167 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:29.835232  317167 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:29.839676  317167 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:31.844958  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:33.845498  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:30.921314  319485 out.go:252] * Restarting existing docker container for "embed-certs-175371" ...
	I1018 12:18:30.921390  319485 cli_runner.go:164] Run: docker start embed-certs-175371
	I1018 12:18:31.169483  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:31.188693  319485 kic.go:430] container "embed-certs-175371" state is running.
	I1018 12:18:31.189103  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:31.209362  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:31.209641  319485 machine.go:93] provisionDockerMachine start ...
	I1018 12:18:31.209725  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:31.229147  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:31.229379  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:31.229390  319485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:18:31.229993  319485 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36872->127.0.0.1:33123: read: connection reset by peer
	I1018 12:18:34.383983  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.384015  319485 ubuntu.go:182] provisioning hostname "embed-certs-175371"
	I1018 12:18:34.384079  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.407484  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.407828  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.407850  319485 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175371 && echo "embed-certs-175371" | sudo tee /etc/hostname
	I1018 12:18:34.571542  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.571633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.593919  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.594233  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.594268  319485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175371/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:18:34.745131  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:18:34.745165  319485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:18:34.745187  319485 ubuntu.go:190] setting up certificates
	I1018 12:18:34.745200  319485 provision.go:84] configureAuth start
	I1018 12:18:34.745288  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:34.769316  319485 provision.go:143] copyHostCerts
	I1018 12:18:34.769395  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:18:34.769421  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:18:34.769499  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:18:34.769623  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:18:34.769630  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:18:34.769673  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:18:34.769842  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:18:34.769853  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:18:34.769895  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:18:34.769991  319485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175371 san=[127.0.0.1 192.168.76.2 embed-certs-175371 localhost minikube]
	I1018 12:18:35.347148  319485 provision.go:177] copyRemoteCerts
	I1018 12:18:35.347208  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:18:35.347243  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.368711  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:35.475696  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:18:35.507103  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:18:35.533969  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:18:35.562565  319485 provision.go:87] duration metric: took 817.346845ms to configureAuth
	I1018 12:18:35.562597  319485 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:18:35.562839  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:35.562989  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.590077  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:35.590320  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:35.590341  319485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:18:36.705988  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:18:36.706031  319485 machine.go:96] duration metric: took 5.49637009s to provisionDockerMachine
	I1018 12:18:36.706047  319485 start.go:293] postStartSetup for "embed-certs-175371" (driver="docker")
	I1018 12:18:36.706060  319485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:18:36.706128  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:18:36.706190  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.727476  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.830826  319485 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:18:36.835533  319485 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:18:36.835569  319485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:18:36.835584  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:18:36.835636  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:18:36.835707  319485 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:18:36.835829  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:18:36.846005  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:36.869811  319485 start.go:296] duration metric: took 163.746336ms for postStartSetup
	I1018 12:18:36.869902  319485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:18:36.869946  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.893357  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.997968  319485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:18:37.004253  319485 fix.go:56] duration metric: took 6.104260841s for fixHost
	I1018 12:18:37.004285  319485 start.go:83] releasing machines lock for "embed-certs-175371", held for 6.104316695s
	I1018 12:18:37.004355  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:37.029349  319485 ssh_runner.go:195] Run: cat /version.json
	I1018 12:18:37.029412  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.029566  319485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:18:37.029633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.054331  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.058158  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.158913  319485 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:37.235612  319485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:18:37.281675  319485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:18:37.287892  319485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:18:37.287969  319485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:18:37.298848  319485 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:18:37.298875  319485 start.go:495] detecting cgroup driver to use...
	I1018 12:18:37.298911  319485 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:18:37.298960  319485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:18:37.318507  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:18:37.335843  319485 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:18:37.335916  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:18:37.357159  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:18:37.373241  319485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:18:37.464197  319485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:18:37.557992  319485 docker.go:234] disabling docker service ...
	I1018 12:18:37.558064  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:18:37.573855  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:18:37.587606  319485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:18:37.677046  319485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:18:37.786485  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:18:37.800125  319485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:18:37.814639  319485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:18:37.814703  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.823696  319485 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:18:37.823802  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.833404  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.843440  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.852880  319485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:18:37.861252  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.870194  319485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.878686  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
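
Taken together, the sed edits above amount to the following lines in /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the commands, not dumped from the file): the pause image is pinned, the cgroup manager is forced to systemd to match the detected host driver, conmon is moved into the pod cgroup, and unprivileged processes are allowed to bind low ports:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The daemon-reload and crio restart a few lines below put these into effect.
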
	I1018 12:18:37.887388  319485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:18:37.894731  319485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:18:37.902146  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:37.980625  319485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:18:38.435447  319485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:18:38.435521  319485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:18:38.439678  319485 start.go:563] Will wait 60s for crictl version
	I1018 12:18:38.439734  319485 ssh_runner.go:195] Run: which crictl
	I1018 12:18:38.443262  319485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:18:38.467148  319485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:18:38.467213  319485 ssh_runner.go:195] Run: crio --version
	I1018 12:18:38.495216  319485 ssh_runner.go:195] Run: crio --version
	I1018 12:18:38.525571  319485 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 12:18:35.846564  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:38.345142  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:38.527068  319485 cli_runner.go:164] Run: docker network inspect embed-certs-175371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
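
The --format argument here is a Go text/template expression evaluated against docker's inspect output; the range over .IPAM.Config is what extracts the network's subnet and gateway. A standalone illustration of the same template mechanics over a stand-in struct (the subnet value is an assumption inferred from this cluster's 192.168.76.x addresses):

    package main

    import (
        "os"
        "text/template"
    )

    type ipamConfig struct{ Subnet, Gateway string }

    // network mimics the fields of docker's inspect output that the
    // log's template walks.
    type network struct {
        Name string
        IPAM struct{ Config []ipamConfig }
    }

    func main() {
        n := network{Name: "embed-certs-175371"}
        n.IPAM.Config = []ipamConfig{{Subnet: "192.168.76.0/24", Gateway: "192.168.76.1"}}
        tmpl := template.Must(template.New("fmt").Parse(
            "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}\n"))
        if err := tmpl.Execute(os.Stdout, n); err != nil {
            panic(err)
        }
    }
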
	I1018 12:18:38.546516  319485 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 12:18:38.550993  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:18:38.561695  319485 kubeadm.go:883] updating cluster {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:18:38.561845  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:38.561901  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.598535  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.598563  319485 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:18:38.598618  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.630421  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.630442  319485 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:18:38.630450  319485 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 12:18:38.630539  319485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-175371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
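
The doubled ExecStart= in the generated kubelet unit above is standard systemd syntax, not a mistake: for a non-oneshot service an override may only end up with one ExecStart, so the empty assignment first clears the ExecStart inherited from the base kubelet.service (written to /lib/systemd/system/kubelet.service further down) before the kubeadm flags are set. The same pattern in a generic override (paths illustrative):

    # /etc/systemd/system/example.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/example --flag=value
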
	I1018 12:18:38.630598  319485 ssh_runner.go:195] Run: crio config
	I1018 12:18:38.679497  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:38.679521  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:38.679539  319485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:18:38.679558  319485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175371 NodeName:embed-certs-175371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:18:38.679684  319485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
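
The kubeadm config assembled above is one multi-document YAML file carrying four API objects: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration (2214 bytes per the scp a few lines below). A stdlib-only sketch that lists the kinds, assuming the file has been saved locally as kubeadm.yaml:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // Each YAML document declares its type in a top-level "kind:" field.
            if line := strings.TrimSpace(sc.Text()); strings.HasPrefix(line, "kind:") {
                fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
    }
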
	
	I1018 12:18:38.679753  319485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:18:38.689079  319485 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:18:38.689144  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:18:38.697752  319485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 12:18:38.712315  319485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:18:38.726955  319485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 12:18:38.742413  319485 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:18:38.747169  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:18:38.758198  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:38.854804  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:38.876145  319485 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371 for IP: 192.168.76.2
	I1018 12:18:38.876167  319485 certs.go:195] generating shared ca certs ...
	I1018 12:18:38.876187  319485 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:38.876358  319485 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:18:38.876406  319485 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:18:38.876416  319485 certs.go:257] generating profile certs ...
	I1018 12:18:38.876507  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/client.key
	I1018 12:18:38.876562  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key.760612f0
	I1018 12:18:38.876613  319485 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key
	I1018 12:18:38.876718  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:18:38.876744  319485 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:18:38.876751  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:18:38.876795  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:18:38.876824  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:18:38.876845  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:18:38.876882  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:38.877407  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:18:38.896628  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:18:38.916658  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:18:38.936639  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:18:38.960966  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 12:18:38.980170  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:18:38.997882  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:18:39.015725  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:18:39.032805  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:18:39.049790  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:18:39.068080  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:18:39.086062  319485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:18:39.098810  319485 ssh_runner.go:195] Run: openssl version
	I1018 12:18:39.105009  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:18:39.113777  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117712  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117797  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.153127  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:18:39.162168  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:18:39.171385  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175469  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175546  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.210362  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:18:39.218971  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:18:39.229154  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233188  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233248  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.268526  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:18:39.276871  319485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:18:39.280846  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:18:39.315107  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:18:39.350704  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:18:39.387775  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:18:39.435187  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:18:39.475299  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
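
Each openssl x509 ... -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, which is what would trigger regeneration. A stdlib Go equivalent of one of these checks (the path mirrors the first check in the series):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Same question openssl answers with -checkend 86400: does this
        // certificate expire within the next 24 hours?
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(2)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h:", cert.NotAfter)
            os.Exit(1) // matches openssl's non-zero exit, prompting regeneration
        }
        fmt.Println("certificate is valid past the 24h window:", cert.NotAfter)
    }
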
	I1018 12:18:39.529584  319485 kubeadm.go:400] StartCluster: {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:39.529660  319485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:18:39.529707  319485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:18:39.572206  319485 cri.go:89] found id: "7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd"
	I1018 12:18:39.572238  319485 cri.go:89] found id: "8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e"
	I1018 12:18:39.572245  319485 cri.go:89] found id: "d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40"
	I1018 12:18:39.572250  319485 cri.go:89] found id: "a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f"
	I1018 12:18:39.572255  319485 cri.go:89] found id: ""
	I1018 12:18:39.572310  319485 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:18:39.585733  319485 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:39.585815  319485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:18:39.594298  319485 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:18:39.594319  319485 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:18:39.594367  319485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:18:39.604664  319485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:18:39.605663  319485 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-175371" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.606304  319485 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-175371" cluster setting kubeconfig missing "embed-certs-175371" context setting]
	I1018 12:18:39.607392  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.609392  319485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:18:39.617900  319485 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 12:18:39.617934  319485 kubeadm.go:601] duration metric: took 23.608426ms to restartPrimaryControlPlane
	I1018 12:18:39.617944  319485 kubeadm.go:402] duration metric: took 88.372405ms to StartCluster
	I1018 12:18:39.617961  319485 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.618027  319485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.620424  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.620686  319485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:39.620787  319485 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:18:39.620892  319485 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175371"
	I1018 12:18:39.620905  319485 addons.go:69] Setting dashboard=true in profile "embed-certs-175371"
	I1018 12:18:39.620954  319485 addons.go:238] Setting addon dashboard=true in "embed-certs-175371"
	W1018 12:18:39.620966  319485 addons.go:247] addon dashboard should already be in state true
	I1018 12:18:39.621000  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.621038  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:39.620915  319485 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-175371"
	W1018 12:18:39.621060  319485 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:18:39.621089  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.620920  319485 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175371"
	I1018 12:18:39.621185  319485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175371"
	I1018 12:18:39.621523  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621548  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621562  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.623582  319485 out.go:179] * Verifying Kubernetes components...
	I1018 12:18:39.624890  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:39.647395  319485 addons.go:238] Setting addon default-storageclass=true in "embed-certs-175371"
	W1018 12:18:39.647416  319485 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:18:39.647444  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.647878  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.649378  319485 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:18:39.649377  319485 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:18:39.650859  319485 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.650877  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:18:39.650935  319485 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:18:39.650953  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.652294  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:18:39.652313  319485 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:18:39.652366  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.685481  319485 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.685508  319485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:18:39.685565  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.688909  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.691698  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.715793  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
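The three `docker container inspect -f` calls above resolve the host port that Docker mapped to the container's 22/tcp, which is how the SSH clients on 127.0.0.1:33123 get opened. A minimal Go sketch of the same lookup, assuming the Docker CLI is on PATH; the profile name is taken from this log, and this is an illustration, not minikube's actual helper:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshPort returns the host port Docker mapped to 22/tcp inside the
    // named container, using the same --format template seen in the log.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshPort("embed-certs-175371")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh port:", port) // e.g. 33123, as in the sshutil lines above
    }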
	I1018 12:18:39.776976  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:39.796702  319485 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:39.810215  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.810840  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:18:39.810861  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:18:39.827587  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:18:39.827617  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:18:39.832984  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.846934  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:18:39.846963  319485 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:18:39.866940  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:18:39.866963  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:18:39.884653  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:18:39.884676  319485 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:18:39.899737  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:18:39.899797  319485 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:18:39.914273  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:18:39.914304  319485 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:18:39.928891  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:18:39.928922  319485 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:18:39.941986  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:18:39.942011  319485 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:18:39.956234  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:18:41.376829  319485 node_ready.go:49] node "embed-certs-175371" is "Ready"
	I1018 12:18:41.376867  319485 node_ready.go:38] duration metric: took 1.579990475s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:41.376885  319485 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:18:41.376941  319485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:41.913233  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.102983393s)
	I1018 12:18:41.913329  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.08031124s)
	I1018 12:18:41.913460  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.957177067s)
	I1018 12:18:41.913484  319485 api_server.go:72] duration metric: took 2.292768638s to wait for apiserver process to appear ...
	I1018 12:18:41.913497  319485 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:18:41.913526  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:41.918402  319485 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-175371 addons enable metrics-server
	
	I1018 12:18:41.919631  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:41.919655  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:41.925471  319485 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 12:18:40.346078  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:42.347310  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:41.927054  319485 addons.go:514] duration metric: took 2.306294485s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 12:18:42.413938  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.418439  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:42.418474  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:42.913848  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.918735  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 12:18:42.919687  319485 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:42.919718  319485 api_server.go:131] duration metric: took 1.006210574s to wait for apiserver health ...
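The healthz wait above polls https://192.168.76.2:8443/healthz roughly every 500ms until it returns 200; the intermediate 500s show only the rbac/bootstrap-roles and scheduling post-start hooks still pending. A minimal sketch of such a poll, assuming the cluster allows unauthenticated /healthz reads (the kubeadm default) and skipping certificate verification purely for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification only for this sketch; real code should
            // trust the cluster CA from the kubeconfig instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
    }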
	I1018 12:18:42.919726  319485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:42.923301  319485 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:42.923341  319485 system_pods.go:61] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.923353  319485 system_pods.go:61] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.923364  319485 system_pods.go:61] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.923373  319485 system_pods.go:61] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.923383  319485 system_pods.go:61] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.923397  319485 system_pods.go:61] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.923409  319485 system_pods.go:61] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.923448  319485 system_pods.go:61] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.923466  319485 system_pods.go:74] duration metric: took 3.733114ms to wait for pod list to return data ...
	I1018 12:18:42.923476  319485 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:42.926029  319485 default_sa.go:45] found service account: "default"
	I1018 12:18:42.926061  319485 default_sa.go:55] duration metric: took 2.577664ms for default service account to be created ...
	I1018 12:18:42.926074  319485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:42.929022  319485 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:42.929049  319485 system_pods.go:89] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.929057  319485 system_pods.go:89] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.929063  319485 system_pods.go:89] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.929069  319485 system_pods.go:89] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.929074  319485 system_pods.go:89] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.929079  319485 system_pods.go:89] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.929084  319485 system_pods.go:89] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.929088  319485 system_pods.go:89] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.929095  319485 system_pods.go:126] duration metric: took 3.016302ms to wait for k8s-apps to be running ...
	I1018 12:18:42.929105  319485 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:42.929153  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:42.942149  319485 system_svc.go:56] duration metric: took 13.033259ms WaitForService to wait for kubelet
	I1018 12:18:42.942182  319485 kubeadm.go:586] duration metric: took 3.321467327s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:42.942204  319485 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:42.944896  319485 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:42.944917  319485 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:42.944942  319485 node_conditions.go:105] duration metric: took 2.731777ms to run NodePressure ...
	I1018 12:18:42.944955  319485 start.go:241] waiting for startup goroutines ...
	I1018 12:18:42.944969  319485 start.go:246] waiting for cluster config update ...
	I1018 12:18:42.945001  319485 start.go:255] writing updated cluster config ...
	I1018 12:18:42.945268  319485 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:42.949454  319485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:42.952932  319485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:44.959171  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
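After the addons are enabled, the run keeps waiting (up to 4m0s) for the labeled kube-system pods to report Ready; coredns-66bc5c9577-b6h9l is still not Ready at 12:18:44. A rough client-go sketch of that kind of wait, assuming client-go is available and reusing the kubeconfig path from this log; minikube's own pod_ready.go differs in detail:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/21647-5865/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            // Same label selector family as the pod_ready wait above.
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
            if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
                fmt.Println("coredns is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for coredns")
    }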
	
	
	==> CRI-O <==
	Oct 18 12:18:07 no-preload-406541 crio[559]: time="2025-10-18T12:18:07.358712239Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 12:18:07 no-preload-406541 crio[559]: time="2025-10-18T12:18:07.36397423Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 12:18:07 no-preload-406541 crio[559]: time="2025-10-18T12:18:07.364006855Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.53207242Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1956d3cd-a69b-4b95-a51d-6b6c48006c81 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.534749202Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4c577758-3061-4e9f-8a7b-36600decb5ef name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.53714424Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper" id=954f2250-5f8f-46b1-bda0-edee95f398de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.539129126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.546073779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.546523222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.569881828Z" level=info msg="Created container 2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper" id=954f2250-5f8f-46b1-bda0-edee95f398de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.570658838Z" level=info msg="Starting container: 2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec" id=5be04e4d-fb9b-4b0c-bffc-ddd25ae2de52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.57276999Z" level=info msg="Started container" PID=1721 containerID=2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper id=5be04e4d-fb9b-4b0c-bffc-ddd25ae2de52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3fd81679ea24313fceafc8d30b3cadcde2f77045a11cb34bd98a251f5b1dd9ab
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.637091448Z" level=info msg="Removing container: 40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030" id=88c22a34-453e-4630-a434-8fc2b950234c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:17 no-preload-406541 crio[559]: time="2025-10-18T12:18:17.649441238Z" level=info msg="Removed container 40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd/dashboard-metrics-scraper" id=88c22a34-453e-4630-a434-8fc2b950234c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.66749735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cf8f7179-6d9b-4d1c-94e4-d855eac9d7ea name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.668475288Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=68057a0f-53ed-4d27-9d98-2f6d02d18abb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.669508611Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=44a78e76-b11f-42c2-b3a4-c69cc3dfc3ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.669825725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.674786763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.674988539Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a088c686830c0cb6a2e001facf5dc5fc70db4b47a1bbd5f1a8cb13100c8ba1aa/merged/etc/passwd: no such file or directory"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.675167707Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a088c686830c0cb6a2e001facf5dc5fc70db4b47a1bbd5f1a8cb13100c8ba1aa/merged/etc/group: no such file or directory"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.675500543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.704726386Z" level=info msg="Created container 62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076: kube-system/storage-provisioner/storage-provisioner" id=44a78e76-b11f-42c2-b3a4-c69cc3dfc3ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.705435219Z" level=info msg="Starting container: 62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076" id=5c5b03b4-b46d-4e8b-af7f-161ca2137ea2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:27 no-preload-406541 crio[559]: time="2025-10-18T12:18:27.707369246Z" level=info msg="Started container" PID=1735 containerID=62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076 description=kube-system/storage-provisioner/storage-provisioner id=5c5b03b4-b46d-4e8b-af7f-161ca2137ea2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=077f82c17428529e98ecd94f00ba0ade8eb40352ad1722a71e470aebfe5b3482
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	62d512662ad1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   077f82c174285       storage-provisioner                          kube-system
	2f228a1149943       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   3fd81679ea243       dashboard-metrics-scraper-6ffb444bf9-q6bfd   kubernetes-dashboard
	d8afd7c12527a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   60739b9f5674a       kubernetes-dashboard-855c9754f9-v6qwc        kubernetes-dashboard
	bf4962a6a3ad2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   6e80cd756af60       coredns-66bc5c9577-bwvrq                     kube-system
	7343005218c69       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   f418e4a9de4e1       busybox                                      default
	40786b0420f7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   077f82c174285       storage-provisioner                          kube-system
	9b0a2248d2179       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   cc78454a95463       kube-proxy-9vbmr                             kube-system
	eeb9a7b0a2689       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   a6a81b438806d       kindnet-dwg7c                                kube-system
	5d618e751f9ba       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   bb80e4919842a       kube-controller-manager-no-preload-406541    kube-system
	133fd0664569c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   65379f445ed6e       kube-apiserver-no-preload-406541             kube-system
	37d2f600fcf0c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   c4161cb2bfae2       etcd-no-preload-406541                       kube-system
	786f9a8bc0ec9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   4f3e6836f52b4       kube-scheduler-no-preload-406541             kube-system
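The table above is the runtime's view of every container on the node, matching what crictl ps -a prints against the CRI-O endpoint. A sketch of the same listing, assuming crictl is installed; the socket path is the CRI-O default, not taken from this log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // List all containers via the CRI, similar to the table above.
        out, err := exec.Command("sudo", "crictl", "--runtime-endpoint",
            "unix:///var/run/crio/crio.sock", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(string(out)) // CombinedOutput carries output even on error
    }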
	
	
	==> coredns [bf4962a6a3ad256176dfa5ae96b9a87a6ed571246e8433b9f043ab17f752c961] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45175 - 55704 "HINFO IN 3551838433391856392.3047988239489226815. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.431724226s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
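The dial errors above show coredns timing out against the kubernetes Service VIP, 10.96.0.1:443, which clears once kube-proxy and the CNI have programmed the service datapath after the restart. A minimal reachability probe under that assumption:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the in-cluster apiserver VIP the same way coredns's
        // client-go dial would; an i/o timeout here means the service
        // datapath (kube-proxy rules / CNI) is not programmed yet.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("10.96.0.1:443 reachable")
    }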
	
	
	==> describe nodes <==
	Name:               no-preload-406541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-406541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=no-preload-406541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-406541
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:16:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:18:26 +0000   Sat, 18 Oct 2025 12:17:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-406541
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                3289e84c-c9b3-408a-9f62-dbb3085e7d17
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-bwvrq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-406541                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-dwg7c                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-406541              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-406541     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-9vbmr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-406541              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q6bfd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v6qwc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node no-preload-406541 event: Registered Node no-preload-406541 in Controller
	  Normal  NodeReady                93s                  kubelet          Node no-preload-406541 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node no-preload-406541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node no-preload-406541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node no-preload-406541 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node no-preload-406541 event: Registered Node no-preload-406541 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [37d2f600fcf0c009e16115908271757cab49845434c4b2db0ade3132da9aaff7] <==
	{"level":"warn","ts":"2025-10-18T12:17:55.219703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.228681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.236569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.243438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.250504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.257868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.265089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.272619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.278977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.285454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.292087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.299242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.306992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.313615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.320879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.328033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.335802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.343238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.351344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.358091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.371012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.375238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.382430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.389897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:55.438223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34104","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:18:49 up  1:01,  0 user,  load average: 3.85, 4.05, 2.62
	Linux no-preload-406541 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eeb9a7b0a2689ceb5e5446d2d318c44949119ed381f76cb943c969ada5e7480d] <==
	I1018 12:17:57.080243       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:17:57.139636       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 12:17:57.139884       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:17:57.139907       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:17:57.139931       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:17:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:17:57.343731       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:17:57.344385       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:17:57.344427       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:17:57.344538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:17:57.645288       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:57.645317       1 metrics.go:72] Registering metrics
	I1018 12:17:57.645414       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:07.343849       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:07.343932       1 main.go:301] handling current node
	I1018 12:18:17.349839       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:17.349877       1 main.go:301] handling current node
	I1018 12:18:27.344211       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:27.344246       1 main.go:301] handling current node
	I1018 12:18:37.349849       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:37.349891       1 main.go:301] handling current node
	I1018 12:18:47.352954       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 12:18:47.353006       1 main.go:301] handling current node
	
	
	==> kube-apiserver [133fd0664569cae2a09912a39da9ebed72def40b96fa66996c7f6cbd105deba3] <==
	I1018 12:17:55.898403       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 12:17:55.898417       1 policy_source.go:240] refreshing policies
	I1018 12:17:55.898493       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:17:55.898501       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:17:55.898507       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:17:55.898513       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:17:55.898541       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:17:55.898680       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:17:55.898719       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:17:55.898714       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 12:17:55.907349       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 12:17:55.908799       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 12:17:55.919518       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:17:55.922140       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 12:17:56.154775       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:17:56.184152       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:17:56.208208       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:17:56.215214       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:17:56.223273       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:17:56.255684       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.39.19"}
	I1018 12:17:56.266307       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.67.249"}
	I1018 12:17:56.802301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:17:59.642296       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:17:59.692357       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:17:59.791610       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5d618e751f9ba92d0e9b73cc902c60091fa7fc312b17c0a534306ddf5267331e] <==
	I1018 12:17:59.199598       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 12:17:59.211022       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:17:59.213295       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 12:17:59.237619       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:17:59.237648       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:17:59.237627       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:17:59.237803       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:17:59.237839       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:17:59.239088       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:17:59.239132       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:17:59.239148       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:17:59.239186       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:17:59.239198       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:17:59.239302       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:17:59.239205       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:17:59.245457       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:17:59.246660       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:59.247803       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:17:59.251063       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:17:59.255383       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:17:59.272583       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:59.280966       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:17:59.280991       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:17:59.281006       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:17:59.281218       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9b0a2248d2179ef0842e69ec0fb3d1c0118e01bfa03af00785477b05bbf28109] <==
	I1018 12:17:56.930009       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:17:56.983092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:17:57.083986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:17:57.084013       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 12:17:57.084110       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:17:57.103278       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:17:57.103344       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:17:57.108775       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:17:57.109181       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:17:57.109199       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:57.110639       1 config.go:200] "Starting service config controller"
	I1018 12:17:57.110660       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:17:57.110817       1 config.go:309] "Starting node config controller"
	I1018 12:17:57.110837       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:17:57.110893       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:17:57.110908       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:17:57.110941       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:17:57.110946       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:17:57.210827       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:17:57.211910       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:17:57.211925       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:17:57.211964       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [786f9a8bc0ec93e60a032d4b983f3c3c2cd05a95a06cfa33a7e7a81ed64a5f13] <==
	I1018 12:17:54.495951       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:17:55.832513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:17:55.832679       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:17:55.832739       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:17:55.832968       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:17:55.866687       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:17:55.866720       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:55.869481       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:55.869528       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:55.869824       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:17:55.869912       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:17:55.970627       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:17:59 no-preload-406541 kubelet[699]: I1018 12:17:59.849387     699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wp88\" (UniqueName: \"kubernetes.io/projected/8332edef-a3c6-4f80-a2dd-eacb94b7a43b-kube-api-access-2wp88\") pod \"dashboard-metrics-scraper-6ffb444bf9-q6bfd\" (UID: \"8332edef-a3c6-4f80-a2dd-eacb94b7a43b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd"
	Oct 18 12:18:00 no-preload-406541 kubelet[699]: I1018 12:18:00.294566     699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 12:18:02 no-preload-406541 kubelet[699]: I1018 12:18:02.595693     699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podStartSLOduration=1.184150265 podStartE2EDuration="3.595668036s" podCreationTimestamp="2025-10-18 12:17:59 +0000 UTC" firstStartedPulling="2025-10-18 12:18:00.09795434 +0000 UTC m=+6.677626813" lastFinishedPulling="2025-10-18 12:18:02.509472038 +0000 UTC m=+9.089144584" observedRunningTime="2025-10-18 12:18:02.595478007 +0000 UTC m=+9.175150486" watchObservedRunningTime="2025-10-18 12:18:02.595668036 +0000 UTC m=+9.175340515"
	Oct 18 12:18:03 no-preload-406541 kubelet[699]: I1018 12:18:03.588061     699 scope.go:117] "RemoveContainer" containerID="c289f37a70c40c4cd56f631f49a6bf157b473ceafeba46a5e311ef1bd7f41d5a"
	Oct 18 12:18:04 no-preload-406541 kubelet[699]: I1018 12:18:04.592851     699 scope.go:117] "RemoveContainer" containerID="c289f37a70c40c4cd56f631f49a6bf157b473ceafeba46a5e311ef1bd7f41d5a"
	Oct 18 12:18:04 no-preload-406541 kubelet[699]: I1018 12:18:04.593003     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:04 no-preload-406541 kubelet[699]: E1018 12:18:04.593217     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:05 no-preload-406541 kubelet[699]: I1018 12:18:05.594704     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:05 no-preload-406541 kubelet[699]: E1018 12:18:05.594928     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:06 no-preload-406541 kubelet[699]: I1018 12:18:06.602341     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:06 no-preload-406541 kubelet[699]: E1018 12:18:06.603248     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:06 no-preload-406541 kubelet[699]: I1018 12:18:06.958078     699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6qwc" podStartSLOduration=2.335113866 podStartE2EDuration="7.958051103s" podCreationTimestamp="2025-10-18 12:17:59 +0000 UTC" firstStartedPulling="2025-10-18 12:18:00.098435935 +0000 UTC m=+6.678108412" lastFinishedPulling="2025-10-18 12:18:05.721373177 +0000 UTC m=+12.301045649" observedRunningTime="2025-10-18 12:18:06.619475972 +0000 UTC m=+13.199148451" watchObservedRunningTime="2025-10-18 12:18:06.958051103 +0000 UTC m=+13.537723596"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: I1018 12:18:17.531588     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: I1018 12:18:17.635799     699 scope.go:117] "RemoveContainer" containerID="40d8b49268b4f0034ac31674a0e02f3b940698ba2c663e566dd82c59132de030"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: I1018 12:18:17.636001     699 scope.go:117] "RemoveContainer" containerID="2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	Oct 18 12:18:17 no-preload-406541 kubelet[699]: E1018 12:18:17.636270     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:26 no-preload-406541 kubelet[699]: I1018 12:18:26.143446     699 scope.go:117] "RemoveContainer" containerID="2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	Oct 18 12:18:26 no-preload-406541 kubelet[699]: E1018 12:18:26.143669     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:27 no-preload-406541 kubelet[699]: I1018 12:18:27.667029     699 scope.go:117] "RemoveContainer" containerID="40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b"
	Oct 18 12:18:37 no-preload-406541 kubelet[699]: I1018 12:18:37.531542     699 scope.go:117] "RemoveContainer" containerID="2f228a114994354e92d8570f64381531a41496d20ad84389b5b4d0deb9fad3ec"
	Oct 18 12:18:37 no-preload-406541 kubelet[699]: E1018 12:18:37.531819     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6bfd_kubernetes-dashboard(8332edef-a3c6-4f80-a2dd-eacb94b7a43b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6bfd" podUID="8332edef-a3c6-4f80-a2dd-eacb94b7a43b"
	Oct 18 12:18:43 no-preload-406541 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:18:43 no-preload-406541 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:18:43 no-preload-406541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:18:43 no-preload-406541 systemd[1]: kubelet.service: Consumed 1.714s CPU time.
	
	
	==> kubernetes-dashboard [d8afd7c12527a3cd1abb0b05cf7514d555f1c3d34293776ee0abc22dfa7847ed] <==
	2025/10/18 12:18:05 Starting overwatch
	2025/10/18 12:18:05 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:05 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:05 Using secret token for csrf signing
	2025/10/18 12:18:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:18:05 Generating JWE encryption key
	2025/10/18 12:18:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:05 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:05 Creating in-cluster Sidecar client
	2025/10/18 12:18:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:05 Serving insecurely on HTTP port: 9090
	2025/10/18 12:18:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [40786b0420f7a144665a1f103ad3f606cd6cabf7bf47ebe88741837fb573232b] <==
	I1018 12:17:56.896574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:18:26.900125       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [62d512662ad1ee0b6a671a7817864180d3148e6813aaeaa115a934796a423076] <==
	I1018 12:18:27.726361       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:18:27.735271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:18:27.735322       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:18:27.737967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:31.193613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:35.454668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:39.053245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:42.106616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:45.129826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:45.134922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:18:45.135088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:18:45.135234       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-406541_df1d8eaf-12f1-41c4-b2dd-ddeb45a44384!
	I1018 12:18:45.135273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf0d3988-5bf7-437b-a187-0fa2d27fb75f", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-406541_df1d8eaf-12f1-41c4-b2dd-ddeb45a44384 became leader
	W1018 12:18:45.138952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:45.143956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:18:45.235730       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-406541_df1d8eaf-12f1-41c4-b2dd-ddeb45a44384!
	W1018 12:18:47.148318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:47.153594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:49.158069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:18:49.165247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
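Side note on the storage-provisioner warnings above: the provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event in the log), which is what triggers the repeated deprecation warnings; they are noisy but not the failure here. A hedged way to inspect the lock object, assuming the kubeconfig context used in the post-mortem below:

	kubectl --context no-preload-406541 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml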
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-406541 -n no-preload-406541
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-406541 -n no-preload-406541: exit status 2 (409.409924ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-406541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.41s)
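Triage note: the status check above still reports the node Running, i.e. nothing was actually paused before the harness gave up. The old-k8s-version Pause failure below exits the same way (status 80), and its stderr captures the underlying error: `sudo runc list -f json` cannot open /run/runc on the crio node. A hedged manual check against this run's profile (assumes the cluster is still up; `minikube ssh` reaches the node over the same SSH path pause.go's ssh_runner uses):

	out/minikube-linux-amd64 ssh -p no-preload-406541 "sudo runc list -f json"
	out/minikube-linux-amd64 ssh -p no-preload-406541 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"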

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-024443 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-024443 --alsologtostderr -v=1: exit status 80 (2.259304993s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-024443 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:18:44.041061  322338 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:44.041305  322338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:44.041315  322338 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:44.041320  322338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:44.041542  322338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:44.041790  322338 out.go:368] Setting JSON to false
	I1018 12:18:44.041830  322338 mustload.go:65] Loading cluster: old-k8s-version-024443
	I1018 12:18:44.042160  322338 config.go:182] Loaded profile config "old-k8s-version-024443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:18:44.042514  322338 cli_runner.go:164] Run: docker container inspect old-k8s-version-024443 --format={{.State.Status}}
	I1018 12:18:44.061344  322338 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:18:44.061642  322338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:44.121419  322338 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-18 12:18:44.1099971 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:44.122392  322338 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-024443 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 12:18:44.124275  322338 out.go:179] * Pausing node old-k8s-version-024443 ... 
	I1018 12:18:44.125416  322338 host.go:66] Checking if "old-k8s-version-024443" exists ...
	I1018 12:18:44.125691  322338 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:44.125732  322338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-024443
	I1018 12:18:44.143582  322338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/old-k8s-version-024443/id_rsa Username:docker}
	I1018 12:18:44.239634  322338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:44.261546  322338 pause.go:52] kubelet running: true
	I1018 12:18:44.261634  322338 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:44.451835  322338 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:44.451933  322338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:44.522241  322338 cri.go:89] found id: "247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7"
	I1018 12:18:44.522272  322338 cri.go:89] found id: "d7cc7969f8959a73ae35786fd5ff767a8bfa2ebbac51d066ef36cdfed10301be"
	I1018 12:18:44.522276  322338 cri.go:89] found id: "1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78"
	I1018 12:18:44.522280  322338 cri.go:89] found id: "284392573f4ad6f3703725c92028a746af8799850cd474e5b9d2167b610c0589"
	I1018 12:18:44.522283  322338 cri.go:89] found id: "698a48720393a674c29dfc41bbf1f15059de251c55cf7701f06cd21dd31b76d4"
	I1018 12:18:44.522288  322338 cri.go:89] found id: "c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79"
	I1018 12:18:44.522292  322338 cri.go:89] found id: "b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3"
	I1018 12:18:44.522296  322338 cri.go:89] found id: "c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f"
	I1018 12:18:44.522299  322338 cri.go:89] found id: "cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923"
	I1018 12:18:44.522313  322338 cri.go:89] found id: "8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	I1018 12:18:44.522316  322338 cri.go:89] found id: "7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb"
	I1018 12:18:44.522319  322338 cri.go:89] found id: ""
	I1018 12:18:44.522364  322338 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:44.535150  322338 retry.go:31] will retry after 242.164105ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:44Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:44.777623  322338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:44.793651  322338 pause.go:52] kubelet running: false
	I1018 12:18:44.793703  322338 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:44.947341  322338 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:44.947434  322338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:45.021658  322338 cri.go:89] found id: "247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7"
	I1018 12:18:45.021687  322338 cri.go:89] found id: "d7cc7969f8959a73ae35786fd5ff767a8bfa2ebbac51d066ef36cdfed10301be"
	I1018 12:18:45.021694  322338 cri.go:89] found id: "1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78"
	I1018 12:18:45.021700  322338 cri.go:89] found id: "284392573f4ad6f3703725c92028a746af8799850cd474e5b9d2167b610c0589"
	I1018 12:18:45.021705  322338 cri.go:89] found id: "698a48720393a674c29dfc41bbf1f15059de251c55cf7701f06cd21dd31b76d4"
	I1018 12:18:45.021711  322338 cri.go:89] found id: "c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79"
	I1018 12:18:45.021715  322338 cri.go:89] found id: "b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3"
	I1018 12:18:45.021720  322338 cri.go:89] found id: "c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f"
	I1018 12:18:45.021724  322338 cri.go:89] found id: "cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923"
	I1018 12:18:45.021744  322338 cri.go:89] found id: "8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	I1018 12:18:45.021754  322338 cri.go:89] found id: "7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb"
	I1018 12:18:45.021791  322338 cri.go:89] found id: ""
	I1018 12:18:45.021829  322338 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:45.034043  322338 retry.go:31] will retry after 207.420488ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:45Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:45.242553  322338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:45.255491  322338 pause.go:52] kubelet running: false
	I1018 12:18:45.255541  322338 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:45.400515  322338 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:45.400573  322338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:45.471383  322338 cri.go:89] found id: "247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7"
	I1018 12:18:45.471403  322338 cri.go:89] found id: "d7cc7969f8959a73ae35786fd5ff767a8bfa2ebbac51d066ef36cdfed10301be"
	I1018 12:18:45.471407  322338 cri.go:89] found id: "1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78"
	I1018 12:18:45.471411  322338 cri.go:89] found id: "284392573f4ad6f3703725c92028a746af8799850cd474e5b9d2167b610c0589"
	I1018 12:18:45.471414  322338 cri.go:89] found id: "698a48720393a674c29dfc41bbf1f15059de251c55cf7701f06cd21dd31b76d4"
	I1018 12:18:45.471417  322338 cri.go:89] found id: "c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79"
	I1018 12:18:45.471420  322338 cri.go:89] found id: "b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3"
	I1018 12:18:45.471422  322338 cri.go:89] found id: "c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f"
	I1018 12:18:45.471424  322338 cri.go:89] found id: "cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923"
	I1018 12:18:45.471430  322338 cri.go:89] found id: "8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	I1018 12:18:45.471435  322338 cri.go:89] found id: "7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb"
	I1018 12:18:45.471439  322338 cri.go:89] found id: ""
	I1018 12:18:45.471485  322338 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:45.483718  322338 retry.go:31] will retry after 383.077637ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:45Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:45.867356  322338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:45.886935  322338 pause.go:52] kubelet running: false
	I1018 12:18:45.886995  322338 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:18:46.106638  322338 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:18:46.106893  322338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:18:46.210949  322338 cri.go:89] found id: "247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7"
	I1018 12:18:46.210975  322338 cri.go:89] found id: "d7cc7969f8959a73ae35786fd5ff767a8bfa2ebbac51d066ef36cdfed10301be"
	I1018 12:18:46.210980  322338 cri.go:89] found id: "1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78"
	I1018 12:18:46.210984  322338 cri.go:89] found id: "284392573f4ad6f3703725c92028a746af8799850cd474e5b9d2167b610c0589"
	I1018 12:18:46.210988  322338 cri.go:89] found id: "698a48720393a674c29dfc41bbf1f15059de251c55cf7701f06cd21dd31b76d4"
	I1018 12:18:46.210993  322338 cri.go:89] found id: "c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79"
	I1018 12:18:46.210997  322338 cri.go:89] found id: "b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3"
	I1018 12:18:46.211000  322338 cri.go:89] found id: "c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f"
	I1018 12:18:46.211003  322338 cri.go:89] found id: "cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923"
	I1018 12:18:46.211012  322338 cri.go:89] found id: "8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	I1018 12:18:46.211017  322338 cri.go:89] found id: "7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb"
	I1018 12:18:46.211021  322338 cri.go:89] found id: ""
	I1018 12:18:46.211088  322338 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:46.230110  322338 out.go:203] 
	W1018 12:18:46.231729  322338 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:18:46.231753  322338 out.go:285] * 
	* 
	W1018 12:18:46.238215  322338 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:18:46.239910  322338 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-024443 --alsologtostderr -v=1 failed: exit status 80
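Reading the stderr above: pause.go disables the kubelet, enumerates CRI containers via crictl (successfully; the same eleven container IDs on each round), then tries to confirm run state with `sudo runc list -f json`, and that call fails on every retry because /run/runc does not exist on the node. GUEST_PAUSE is just that last error surfaced. A hedged sketch for confirming where the runtime state actually lives on this crio node (that crio keeps its runc state under a different root is an assumption to verify, not something this log shows):

	out/minikube-linux-amd64 ssh -p old-k8s-version-024443 "sudo ls /run/runc"
	out/minikube-linux-amd64 ssh -p old-k8s-version-024443 "sudo crictl ps -a"
	out/minikube-linux-amd64 ssh -p old-k8s-version-024443 "sudo crio config 2>/dev/null | grep -n runtime_root"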
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-024443
helpers_test.go:243: (dbg) docker inspect old-k8s-version-024443:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570",
	        "Created": "2025-10-18T12:16:27.110733205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309999,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:17:43.03153287Z",
	            "FinishedAt": "2025-10-18T12:17:41.87092059Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/hosts",
	        "LogPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570-json.log",
	        "Name": "/old-k8s-version-024443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-024443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-024443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570",
	                "LowerDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-024443",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-024443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-024443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-024443",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-024443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c4077dd60b5a23f9638f5f1d9db9ee26ce8f067c60547e3755b5892713d0be18",
	            "SandboxKey": "/var/run/docker/netns/c4077dd60b5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-024443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:3b:07:46:28:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "704be5e99155d09cbf122649ccef6bb6653fc58dfc14bb6d440e5291162e7e3c",
	                    "EndpointID": "15d4c018851341f8eb5a9c5dad47746ef36d41417a0c2849beeb5bacedb0c5c4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-024443",
	                        "9b192bc9f9a7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
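For quick triage of the inspect output above, a one-liner sketch (assumes jq is available on the host; the field names match the JSON shown):

	docker inspect old-k8s-version-024443 | jq '.[0].State | {Status, Running, Paused}'

On this run that yields Status "running", Running true, Paused false: the container-level state confirms nothing was paused, consistent with the GUEST_PAUSE exit above.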
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443: exit status 2 (399.010126ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-024443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-024443 logs -n 25: (1.396965435s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                                                                                               │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p old-k8s-version-024443 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
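The last audit row is the pause invocation under post-mortem here; its END TIME is empty because the command never completed successfully. A minimal reproduction sketch, reusing the profile name and flags recorded in the table:

    # Re-run the failing pause step against the existing profile
    out/minikube-linux-amd64 pause -p old-k8s-version-024443 --alsologtostderr -v=1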
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
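Decoded against that format, the first entry below breaks down as follows (annotation only; the fields are taken from the line itself):

    # I1018 12:18:30.700052  319485 out.go:360] Setting OutFile to fd 1 ...
    # I               -> severity, one of I/W/E/F
    # 1018            -> mmdd (Oct 18)
    # 12:18:30.700052 -> hh:mm:ss.uuuuuu
    # 319485          -> threadid
    # out.go:360      -> file:line
    # remainder       -> msg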
	I1018 12:18:30.700052  319485 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:30.700328  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700338  319485 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:30.700342  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700573  319485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:30.701112  319485 out.go:368] Setting JSON to false
	I1018 12:18:30.702451  319485 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3659,"bootTime":1760786252,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:30.702547  319485 start.go:141] virtualization: kvm guest
	I1018 12:18:30.704614  319485 out.go:179] * [embed-certs-175371] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:30.706016  319485 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:30.706038  319485 notify.go:220] Checking for updates...
	I1018 12:18:30.708920  319485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:30.710890  319485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:30.712258  319485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:30.713409  319485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:30.714965  319485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:30.716835  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:30.717456  319485 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:30.741640  319485 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:30.741748  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.802733  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.790905861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
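The single JSON blob above is the raw output of the `docker system info --format "{{json .}}"` call logged just before it. To read it outside the log, pretty-printing is easier (a sketch using the Python standard-library formatter; jq works equally well if installed):

    docker system info --format '{{json .}}' | python3 -m json.tool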
	I1018 12:18:30.802866  319485 docker.go:318] overlay module found
	I1018 12:18:30.805106  319485 out.go:179] * Using the docker driver based on existing profile
	W1018 12:18:26.415356  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	W1018 12:18:28.908743  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	I1018 12:18:30.410244  310517 pod_ready.go:94] pod "coredns-66bc5c9577-bwvrq" is "Ready"
	I1018 12:18:30.410272  310517 pod_ready.go:86] duration metric: took 33.006670577s for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.413489  310517 pod_ready.go:83] waiting for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.418087  310517 pod_ready.go:94] pod "etcd-no-preload-406541" is "Ready"
	I1018 12:18:30.418113  310517 pod_ready.go:86] duration metric: took 4.60176ms for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.420752  310517 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.425914  310517 pod_ready.go:94] pod "kube-apiserver-no-preload-406541" is "Ready"
	I1018 12:18:30.425945  310517 pod_ready.go:86] duration metric: took 5.137183ms for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.430423  310517 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.608129  310517 pod_ready.go:94] pod "kube-controller-manager-no-preload-406541" is "Ready"
	I1018 12:18:30.608164  310517 pod_ready.go:86] duration metric: took 177.709701ms for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.807461  310517 pod_ready.go:83] waiting for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.806468  319485 start.go:305] selected driver: docker
	I1018 12:18:30.806488  319485 start.go:925] validating driver "docker" against &{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.806613  319485 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:30.807410  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.867893  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.856888749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:30.868200  319485 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:30.868236  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:30.868281  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:30.868319  319485 start.go:349] cluster config:
	{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.870215  319485 out.go:179] * Starting "embed-certs-175371" primary control-plane node in "embed-certs-175371" cluster
	I1018 12:18:30.871831  319485 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:30.873306  319485 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:30.874877  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:30.874928  319485 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:30.874944  319485 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:30.875010  319485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:30.875066  319485 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:30.875078  319485 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:30.875220  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:30.899840  319485 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:30.899862  319485 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:30.899879  319485 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:30.899905  319485 start.go:360] acquireMachinesLock for embed-certs-175371: {Name:mk656d4acd5501b1836b6cdb3453deba417e2657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:30.899958  319485 start.go:364] duration metric: took 36.728µs to acquireMachinesLock for "embed-certs-175371"
	I1018 12:18:30.899976  319485 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:18:30.899983  319485 fix.go:54] fixHost starting: 
	I1018 12:18:30.900188  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:30.918592  319485 fix.go:112] recreateIfNeeded on embed-certs-175371: state=Stopped err=<nil>
	W1018 12:18:30.918622  319485 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:18:31.208253  310517 pod_ready.go:94] pod "kube-proxy-9vbmr" is "Ready"
	I1018 12:18:31.208285  310517 pod_ready.go:86] duration metric: took 400.799145ms for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.407677  310517 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806754  310517 pod_ready.go:94] pod "kube-scheduler-no-preload-406541" is "Ready"
	I1018 12:18:31.806818  310517 pod_ready.go:86] duration metric: took 399.114489ms for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806829  310517 pod_ready.go:40] duration metric: took 34.407726613s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:31.854283  310517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:18:31.855987  310517 out.go:179] * Done! kubectl is now configured to use "no-preload-406541" cluster and "default" namespace by default
	W1018 12:18:29.376596  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	I1018 12:18:30.875552  309793 pod_ready.go:94] pod "coredns-5dd5756b68-s4wnq" is "Ready"
	I1018 12:18:30.875577  309793 pod_ready.go:86] duration metric: took 36.005408914s for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.878359  309793 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.883038  309793 pod_ready.go:94] pod "etcd-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.883061  309793 pod_ready.go:86] duration metric: took 4.681016ms for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.886183  309793 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.890240  309793 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.890262  309793 pod_ready.go:86] duration metric: took 4.059352ms for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.893534  309793 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.074647  309793 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-024443" is "Ready"
	I1018 12:18:31.074685  309793 pod_ready.go:86] duration metric: took 181.128894ms for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.274861  309793 pod_ready.go:83] waiting for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.674522  309793 pod_ready.go:94] pod "kube-proxy-tzlpd" is "Ready"
	I1018 12:18:31.674555  309793 pod_ready.go:86] duration metric: took 399.668633ms for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.874734  309793 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274153  309793 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-024443" is "Ready"
	I1018 12:18:32.274178  309793 pod_ready.go:86] duration metric: took 399.401101ms for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274188  309793 pod_ready.go:40] duration metric: took 37.409550626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:32.318706  309793 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 12:18:32.320699  309793 out.go:203] 
	W1018 12:18:32.322350  309793 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 12:18:32.323906  309793 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 12:18:32.325540  309793 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-024443" cluster and "default" namespace by default
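Both restores finish by writing kubeconfig contexts named after their profiles (minikube's default behavior), so each cluster can now be queried explicitly, e.g.:

    kubectl --context no-preload-406541 get pods -A
    kubectl --context old-k8s-version-024443 get pods -A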
	I1018 12:18:29.298582  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.303739  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:29.303786  317167 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:29.797387  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.802331  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 12:18:29.803460  317167 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:29.803483  317167 api_server.go:131] duration metric: took 1.00630107s to wait for apiserver health ...
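The `[+]`/`[-]` per-check listing above is the apiserver's verbose health report; the same view can be fetched by hand while a run is live (a sketch; `-k` skips TLS verification of the cluster's self-signed serving certificate, and `?verbose` asks the apiserver for the per-check breakdown, which is readable anonymously under default RBAC):

    curl -k "https://192.168.103.2:8444/healthz?verbose"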
	I1018 12:18:29.803491  317167 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:29.807265  317167 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:29.807303  317167 system_pods.go:61] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.807319  317167 system_pods.go:61] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.807327  317167 system_pods.go:61] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.807333  317167 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.807341  317167 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.807349  317167 system_pods.go:61] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.807368  317167 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.807380  317167 system_pods.go:61] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.807389  317167 system_pods.go:74] duration metric: took 3.891153ms to wait for pod list to return data ...
	I1018 12:18:29.807401  317167 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:29.810242  317167 default_sa.go:45] found service account: "default"
	I1018 12:18:29.810296  317167 default_sa.go:55] duration metric: took 2.860617ms for default service account to be created ...
	I1018 12:18:29.810306  317167 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:29.813451  317167 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:29.813483  317167 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.813490  317167 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.813495  317167 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.813500  317167 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.813506  317167 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.813509  317167 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.813514  317167 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.813520  317167 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.813527  317167 system_pods.go:126] duration metric: took 3.216525ms to wait for k8s-apps to be running ...
	I1018 12:18:29.813536  317167 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:29.813576  317167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:29.827054  317167 system_svc.go:56] duration metric: took 13.51026ms WaitForService to wait for kubelet
	I1018 12:18:29.827080  317167 kubeadm.go:586] duration metric: took 3.447871394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:29.827097  317167 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:29.830363  317167 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:29.830389  317167 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:29.830401  317167 node_conditions.go:105] duration metric: took 3.29887ms to run NodePressure ...
	I1018 12:18:29.830412  317167 start.go:241] waiting for startup goroutines ...
	I1018 12:18:29.830418  317167 start.go:246] waiting for cluster config update ...
	I1018 12:18:29.830429  317167 start.go:255] writing updated cluster config ...
	I1018 12:18:29.830727  317167 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:29.835232  317167 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:29.839676  317167 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:31.844958  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:33.845498  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
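The pod_ready loop above polls kube-system pods matching the listed label selectors until each reports Ready. An equivalent one-off check with kubectl (a sketch, shown for the kube-dns label only):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m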
	I1018 12:18:30.921314  319485 out.go:252] * Restarting existing docker container for "embed-certs-175371" ...
	I1018 12:18:30.921390  319485 cli_runner.go:164] Run: docker start embed-certs-175371
	I1018 12:18:31.169483  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:31.188693  319485 kic.go:430] container "embed-certs-175371" state is running.
	I1018 12:18:31.189103  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:31.209362  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:31.209641  319485 machine.go:93] provisionDockerMachine start ...
	I1018 12:18:31.209725  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:31.229147  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:31.229379  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:31.229390  319485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:18:31.229993  319485 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36872->127.0.0.1:33123: read: connection reset by peer
	I1018 12:18:34.383983  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.384015  319485 ubuntu.go:182] provisioning hostname "embed-certs-175371"
	I1018 12:18:34.384079  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.407484  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.407828  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.407850  319485 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175371 && echo "embed-certs-175371" | sudo tee /etc/hostname
	I1018 12:18:34.571542  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.571633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.593919  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.594233  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.594268  319485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175371/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:18:34.745131  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
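The guarded script above either rewrites the existing 127.0.1.1 entry or appends one, so exactly one mapping to the new hostname remains. A quick verification on the node (a sketch):

    grep '^127.0.1.1' /etc/hosts
    # expected: 127.0.1.1 embed-certs-175371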
	I1018 12:18:34.745165  319485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:18:34.745187  319485 ubuntu.go:190] setting up certificates
	I1018 12:18:34.745200  319485 provision.go:84] configureAuth start
	I1018 12:18:34.745288  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:34.769316  319485 provision.go:143] copyHostCerts
	I1018 12:18:34.769395  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:18:34.769421  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:18:34.769499  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:18:34.769623  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:18:34.769630  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:18:34.769673  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:18:34.769842  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:18:34.769853  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:18:34.769895  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:18:34.769991  319485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175371 san=[127.0.0.1 192.168.76.2 embed-certs-175371 localhost minikube]
	I1018 12:18:35.347148  319485 provision.go:177] copyRemoteCerts
	I1018 12:18:35.347208  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:18:35.347243  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.368711  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:35.475696  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:18:35.507103  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:18:35.533969  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:18:35.562565  319485 provision.go:87] duration metric: took 817.346845ms to configureAuth
	I1018 12:18:35.562597  319485 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:18:35.562839  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:35.562989  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.590077  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:35.590320  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:35.590341  319485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:18:36.705988  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:18:36.706031  319485 machine.go:96] duration metric: took 5.49637009s to provisionDockerMachine
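The sysconfig drop-in written above hands the cluster's service CIDR to CRI-O as an insecure registry, which the echoed output confirms. It can be inspected on the node directly (a sketch):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '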
	I1018 12:18:36.706047  319485 start.go:293] postStartSetup for "embed-certs-175371" (driver="docker")
	I1018 12:18:36.706060  319485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:18:36.706128  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:18:36.706190  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.727476  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.830826  319485 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:18:36.835533  319485 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:18:36.835569  319485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:18:36.835584  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:18:36.835636  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:18:36.835707  319485 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:18:36.835829  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:18:36.846005  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:36.869811  319485 start.go:296] duration metric: took 163.746336ms for postStartSetup
	I1018 12:18:36.869902  319485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:18:36.869946  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.893357  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.997968  319485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:18:37.004253  319485 fix.go:56] duration metric: took 6.104260841s for fixHost
	I1018 12:18:37.004285  319485 start.go:83] releasing machines lock for "embed-certs-175371", held for 6.104316695s
	I1018 12:18:37.004355  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:37.029349  319485 ssh_runner.go:195] Run: cat /version.json
	I1018 12:18:37.029412  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.029566  319485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:18:37.029633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.054331  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.058158  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.158913  319485 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:37.235612  319485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:18:37.281675  319485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:18:37.287892  319485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:18:37.287969  319485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:18:37.298848  319485 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:18:37.298875  319485 start.go:495] detecting cgroup driver to use...
	I1018 12:18:37.298911  319485 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:18:37.298960  319485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:18:37.318507  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:18:37.335843  319485 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:18:37.335916  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:18:37.357159  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:18:37.373241  319485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:18:37.464197  319485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:18:37.557992  319485 docker.go:234] disabling docker service ...
	I1018 12:18:37.558064  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:18:37.573855  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:18:37.587606  319485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:18:37.677046  319485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:18:37.786485  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:18:37.800125  319485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:18:37.814639  319485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:18:37.814703  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.823696  319485 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:18:37.823802  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.833404  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.843440  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.852880  319485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:18:37.861252  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.870194  319485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.878686  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.887388  319485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:18:37.894731  319485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:18:37.902146  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:37.980625  319485 ssh_runner.go:195] Run: sudo systemctl restart crio
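Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager with conmon in the pod cgroup, and open unprivileged low ports via default_sysctls before the restart. A quick way to confirm the rewritten drop-in (a sketch; the expected values follow from the sed commands, assuming an otherwise default config):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",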
	I1018 12:18:38.435447  319485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:18:38.435521  319485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
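After `systemctl restart crio`, the daemon is not assumed ready: start.go:542 polls `stat` on the socket path with a 60s budget. A small Go version of that wait loop (path and timeout from the log; the 500ms poll interval is an assumption):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// Wait up to 60s for the CRI-O socket to appear, polling like
// the `stat /var/run/crio/crio.sock` check in the log.
func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
	os.Exit(1)
}
```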
	I1018 12:18:38.439678  319485 start.go:563] Will wait 60s for crictl version
	I1018 12:18:38.439734  319485 ssh_runner.go:195] Run: which crictl
	I1018 12:18:38.443262  319485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:18:38.467148  319485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:18:38.467213  319485 ssh_runner.go:195] Run: crio --version
	I1018 12:18:38.495216  319485 ssh_runner.go:195] Run: crio --version
	I1018 12:18:38.525571  319485 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 12:18:35.846564  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:38.345142  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:38.527068  319485 cli_runner.go:164] Run: docker network inspect embed-certs-175371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:38.546516  319485 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 12:18:38.550993  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
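The bash one-liner above is an idempotent hosts-file update: filter out any existing `host.minikube.internal` line, append the current mapping, and copy the result back over /etc/hosts. The same pattern in Go (IP and hostname from the log; writing directly rather than via `sudo cp`):

```go
package main

import (
	"os"
	"strings"
)

// Idempotently pin host.minikube.internal in /etc/hosts: drop any
// stale entry, then append the current mapping.
func main() {
	const suffix = "\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, suffix) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.76.1"+suffix)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```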
	I1018 12:18:38.561695  319485 kubeadm.go:883] updating cluster {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:18:38.561845  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:38.561901  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.598535  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.598563  319485 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:18:38.598618  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.630421  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.630442  319485 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:18:38.630450  319485 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 12:18:38.630539  319485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-175371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:18:38.630598  319485 ssh_runner.go:195] Run: crio config
	I1018 12:18:38.679497  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:38.679521  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:38.679539  319485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:18:38.679558  319485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175371 NodeName:embed-certs-175371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:18:38.679684  319485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
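The block above is the complete multi-document kubeadm config that kubeadm.go:196 renders from the options struct logged at kubeadm.go:190. A minimal sketch of that render step using text/template (the `Node` struct and its fields here are illustrative stand-ins, not minikube's real types):

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative input for rendering an InitConfiguration fragment
// like the one in the log; Node and its fields are hypothetical.
type Node struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	Name             string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.Name}}"
`

func main() {
	n := Node{"192.168.76.2", 8443, "unix:///var/run/crio/crio.sock", "embed-certs-175371"}
	tmpl := template.Must(template.New("init").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}
```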
	
	I1018 12:18:38.679753  319485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:18:38.689079  319485 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:18:38.689144  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:18:38.697752  319485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 12:18:38.712315  319485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:18:38.726955  319485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 12:18:38.742413  319485 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:18:38.747169  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:18:38.758198  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:38.854804  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:38.876145  319485 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371 for IP: 192.168.76.2
	I1018 12:18:38.876167  319485 certs.go:195] generating shared ca certs ...
	I1018 12:18:38.876187  319485 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:38.876358  319485 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:18:38.876406  319485 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:18:38.876416  319485 certs.go:257] generating profile certs ...
	I1018 12:18:38.876507  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/client.key
	I1018 12:18:38.876562  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key.760612f0
	I1018 12:18:38.876613  319485 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key
	I1018 12:18:38.876718  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:18:38.876744  319485 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:18:38.876751  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:18:38.876795  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:18:38.876824  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:18:38.876845  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:18:38.876882  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:38.877407  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:18:38.896628  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:18:38.916658  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:18:38.936639  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:18:38.960966  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 12:18:38.980170  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:18:38.997882  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:18:39.015725  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:18:39.032805  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:18:39.049790  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:18:39.068080  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:18:39.086062  319485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:18:39.098810  319485 ssh_runner.go:195] Run: openssl version
	I1018 12:18:39.105009  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:18:39.113777  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117712  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117797  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.153127  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:18:39.162168  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:18:39.171385  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175469  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175546  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.210362  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:18:39.218971  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:18:39.229154  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233188  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233248  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.268526  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
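The openssl/ln sequence above installs each CA into the system trust store: OpenSSL-based clients look certificates up by subject-hash filenames such as b5213941.0, so the hash is computed with `openssl x509 -hash -noout` and a symlink of that name is pointed at the PEM. One round of that in Go (shelling out to openssl for the hash, as the log does; `installCA` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// Link /etc/ssl/certs/<subject-hash>.0 to a CA PEM so OpenSSL-based
// clients trust it, mirroring the openssl+ln steps in the log.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```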
	I1018 12:18:39.276871  319485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:18:39.280846  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:18:39.315107  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:18:39.350704  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:18:39.387775  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:18:39.435187  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:18:39.475299  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
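Each `-checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours (86,400 seconds). The same check done natively with crypto/x509 (path from the log; only the first PEM block is parsed):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Report whether a certificate expires within the next 24h,
// equivalent to `openssl x509 -checkend 86400 -in <file>`.
func expiresSoon(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```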
	I1018 12:18:39.529584  319485 kubeadm.go:400] StartCluster: {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:39.529660  319485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:18:39.529707  319485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:18:39.572206  319485 cri.go:89] found id: "7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd"
	I1018 12:18:39.572238  319485 cri.go:89] found id: "8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e"
	I1018 12:18:39.572245  319485 cri.go:89] found id: "d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40"
	I1018 12:18:39.572250  319485 cri.go:89] found id: "a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f"
	I1018 12:18:39.572255  319485 cri.go:89] found id: ""
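cri.go:54/89 above lists the existing kube-system containers by running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and splitting the output into IDs; the trailing empty `found id: ""` comes from the final newline. A Go sketch of the same listing (strings.Fields sidesteps that empty entry):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List container IDs in the kube-system namespace, mirroring the
// crictl invocation in the log (one hex ID per output line).
func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
```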
	I1018 12:18:39.572310  319485 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:18:39.585733  319485 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:39.585815  319485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:18:39.594298  319485 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:18:39.594319  319485 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:18:39.594367  319485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:18:39.604664  319485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:18:39.605663  319485 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-175371" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.606304  319485 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-175371" cluster setting kubeconfig missing "embed-certs-175371" context setting]
	I1018 12:18:39.607392  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.609392  319485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:18:39.617900  319485 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 12:18:39.617934  319485 kubeadm.go:601] duration metric: took 23.608426ms to restartPrimaryControlPlane
	I1018 12:18:39.617944  319485 kubeadm.go:402] duration metric: took 88.372405ms to StartCluster
	I1018 12:18:39.617961  319485 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.618027  319485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.620424  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.620686  319485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:39.620787  319485 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:18:39.620892  319485 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175371"
	I1018 12:18:39.620905  319485 addons.go:69] Setting dashboard=true in profile "embed-certs-175371"
	I1018 12:18:39.620954  319485 addons.go:238] Setting addon dashboard=true in "embed-certs-175371"
	W1018 12:18:39.620966  319485 addons.go:247] addon dashboard should already be in state true
	I1018 12:18:39.621000  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.621038  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:39.620915  319485 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-175371"
	W1018 12:18:39.621060  319485 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:18:39.621089  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.620920  319485 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175371"
	I1018 12:18:39.621185  319485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175371"
	I1018 12:18:39.621523  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621548  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621562  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.623582  319485 out.go:179] * Verifying Kubernetes components...
	I1018 12:18:39.624890  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:39.647395  319485 addons.go:238] Setting addon default-storageclass=true in "embed-certs-175371"
	W1018 12:18:39.647416  319485 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:18:39.647444  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.647878  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.649378  319485 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:18:39.649377  319485 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:18:39.650859  319485 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.650877  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:18:39.650935  319485 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:18:39.650953  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.652294  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:18:39.652313  319485 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:18:39.652366  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.685481  319485 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.685508  319485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:18:39.685565  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.688909  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.691698  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.715793  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.776976  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:39.796702  319485 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:39.810215  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.810840  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:18:39.810861  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:18:39.827587  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:18:39.827617  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:18:39.832984  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.846934  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:18:39.846963  319485 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:18:39.866940  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:18:39.866963  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:18:39.884653  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:18:39.884676  319485 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:18:39.899737  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:18:39.899797  319485 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:18:39.914273  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:18:39.914304  319485 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:18:39.928891  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:18:39.928922  319485 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:18:39.941986  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:18:39.942011  319485 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:18:39.956234  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
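Each dashboard manifest is first copied into /etc/kubernetes/addons/, then a single kubectl apply with repeated -f flags installs the whole set in one invocation against the node-local kubeconfig. A short sketch of building that command (binary and paths from the log; the manifest list is abbreviated):

```go
package main

import (
	"os"
	"os/exec"
)

// Apply several addon manifests in one kubectl call, as in the log:
// repeated -f flags against the node-local kubeconfig.
func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml", // ...plus the rest shown above
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```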
	I1018 12:18:41.376829  319485 node_ready.go:49] node "embed-certs-175371" is "Ready"
	I1018 12:18:41.376867  319485 node_ready.go:38] duration metric: took 1.579990475s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:41.376885  319485 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:18:41.376941  319485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:41.913233  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.102983393s)
	I1018 12:18:41.913329  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.08031124s)
	I1018 12:18:41.913460  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.957177067s)
	I1018 12:18:41.913484  319485 api_server.go:72] duration metric: took 2.292768638s to wait for apiserver process to appear ...
	I1018 12:18:41.913497  319485 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:18:41.913526  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:41.918402  319485 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-175371 addons enable metrics-server
	
	I1018 12:18:41.919631  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:41.919655  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:41.925471  319485 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 12:18:40.346078  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:42.347310  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:41.927054  319485 addons.go:514] duration metric: took 2.306294485s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 12:18:42.413938  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.418439  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:42.418474  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:42.913848  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.918735  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 12:18:42.919687  319485 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:42.919718  319485 api_server.go:131] duration metric: took 1.006210574s to wait for apiserver health ...
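The 500 responses above are expected during a restart: the `[-]` post-start hooks had not finished, so the check loops on /healthz roughly every 500ms until it reads 200/ok, which here took about a second. A hedged Go sketch of such a poll loop (the InsecureSkipVerify transport is a simplification for the sketch; minikube verifies against the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

// Poll the apiserver /healthz endpoint until it returns 200 or the
// deadline passes, like the repeated checks in the log.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip cert verification; real code checks the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}
```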
	I1018 12:18:42.919726  319485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:42.923301  319485 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:42.923341  319485 system_pods.go:61] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.923353  319485 system_pods.go:61] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.923364  319485 system_pods.go:61] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.923373  319485 system_pods.go:61] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.923383  319485 system_pods.go:61] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.923397  319485 system_pods.go:61] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.923409  319485 system_pods.go:61] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.923448  319485 system_pods.go:61] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.923466  319485 system_pods.go:74] duration metric: took 3.733114ms to wait for pod list to return data ...
	I1018 12:18:42.923476  319485 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:42.926029  319485 default_sa.go:45] found service account: "default"
	I1018 12:18:42.926061  319485 default_sa.go:55] duration metric: took 2.577664ms for default service account to be created ...
	I1018 12:18:42.926074  319485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:42.929022  319485 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:42.929049  319485 system_pods.go:89] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.929057  319485 system_pods.go:89] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.929063  319485 system_pods.go:89] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.929069  319485 system_pods.go:89] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.929074  319485 system_pods.go:89] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.929079  319485 system_pods.go:89] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.929084  319485 system_pods.go:89] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.929088  319485 system_pods.go:89] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.929095  319485 system_pods.go:126] duration metric: took 3.016302ms to wait for k8s-apps to be running ...
	I1018 12:18:42.929105  319485 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:42.929153  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:42.942149  319485 system_svc.go:56] duration metric: took 13.033259ms WaitForService to wait for kubelet
	I1018 12:18:42.942182  319485 kubeadm.go:586] duration metric: took 3.321467327s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:42.942204  319485 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:42.944896  319485 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:42.944917  319485 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:42.944942  319485 node_conditions.go:105] duration metric: took 2.731777ms to run NodePressure ...
	I1018 12:18:42.944955  319485 start.go:241] waiting for startup goroutines ...
	I1018 12:18:42.944969  319485 start.go:246] waiting for cluster config update ...
	I1018 12:18:42.945001  319485 start.go:255] writing updated cluster config ...
	I1018 12:18:42.945268  319485 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:42.949454  319485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:42.952932  319485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:44.959171  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 12:18:11 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:11.616256588Z" level=info msg="Created container 7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m/kubernetes-dashboard" id=c31d5d1b-21bd-4056-bd7e-2188389904bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:11 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:11.616972315Z" level=info msg="Starting container: 7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb" id=09a1cd46-54af-45ed-b5cd-2dff48f524ed name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:11 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:11.619112027Z" level=info msg="Started container" PID=1725 containerID=7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m/kubernetes-dashboard id=09a1cd46-54af-45ed-b5cd-2dff48f524ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f12c5c060827f15e66ad580061c6dccbc67100f3004cd56827514387e89910f
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.789966277Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ad1b2d6f-fb13-4a0c-bcd5-95a92af37edd name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.790960523Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5bce0974-6151-4fe7-a2c8-92289272e09d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.791955037Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=66f6eb97-1197-4432-96aa-d55522163295 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.792229211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.798234452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.798461929Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b0a1d8543e432f19f9929b66f052cbf3d933b95ea7dc5801a148647b55fb1465/merged/etc/passwd: no such file or directory"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.798609647Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b0a1d8543e432f19f9929b66f052cbf3d933b95ea7dc5801a148647b55fb1465/merged/etc/group: no such file or directory"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.79898679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.832015099Z" level=info msg="Created container 247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7: kube-system/storage-provisioner/storage-provisioner" id=66f6eb97-1197-4432-96aa-d55522163295 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.832664287Z" level=info msg="Starting container: 247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7" id=b9cc912f-0bb6-4621-a540-d4906337ee7a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.834897806Z" level=info msg="Started container" PID=1749 containerID=247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7 description=kube-system/storage-provisioner/storage-provisioner id=b9cc912f-0bb6-4621-a540-d4906337ee7a name=/runtime.v1.RuntimeService/StartContainer sandboxID=346c387bf6c228550bcc0d24af90172964bc889faa361401d51b3b7a151d650b
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.6738114Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8b2dcdeb-32e2-4559-97ae-c04770a486ce name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.675286063Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b7ecc69e-547a-4f4a-9bfc-1b6ae982990f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.676395649Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper" id=6d068cee-d36f-4059-924d-5405a31dcbdb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.676701662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.688040864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.688693198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.728564016Z" level=info msg="Created container 8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper" id=6d068cee-d36f-4059-924d-5405a31dcbdb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.730592675Z" level=info msg="Starting container: 8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd" id=4c2f8470-9a08-4609-9fdf-e436eda0462c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.733357372Z" level=info msg="Started container" PID=1765 containerID=8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper id=4c2f8470-9a08-4609-9fdf-e436eda0462c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d90a407ff483c643969ead4caa6556f121c0ad5520de1dc3076beaadc68918af
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.805068241Z" level=info msg="Removing container: e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc" id=f13cf10a-443b-4e08-aeb8-0184a50c050f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.816128566Z" level=info msg="Removed container e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper" id=f13cf10a-443b-4e08-aeb8-0184a50c050f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8b3e716afde9f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   d90a407ff483c       dashboard-metrics-scraper-5f989dc9cf-b8j85       kubernetes-dashboard
	247925a32df25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   346c387bf6c22       storage-provisioner                              kube-system
	7639427c91a82       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   8f12c5c060827       kubernetes-dashboard-8694d4445c-7gk7m            kubernetes-dashboard
	d7cc7969f8959       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   287bf5f53ebb3       coredns-5dd5756b68-s4wnq                         kube-system
	027011fa4fdb8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   f3ea23a27e8fd       busybox                                          default
	1a759c1022fc6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   346c387bf6c22       storage-provisioner                              kube-system
	284392573f4ad       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   9f997237d8cd9       kube-proxy-tzlpd                                 kube-system
	698a48720393a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   c8e304c0de167       kindnet-g8pwk                                    kube-system
	c1618cf2491e6       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   9f10de74d1082       kube-apiserver-old-k8s-version-024443            kube-system
	b9fd7b97fe26a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   458d42ebe5e93       kube-scheduler-old-k8s-version-024443            kube-system
	c664320629fb5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   c2f81268dce80       etcd-old-k8s-version-024443                      kube-system
	cd847940cd839       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   503ae8ca0b684       kube-controller-manager-old-k8s-version-024443   kube-system
	
	
	==> coredns [d7cc7969f8959a73ae35786fd5ff767a8bfa2ebbac51d066ef36cdfed10301be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50130 - 25725 "HINFO IN 3914257451278979214.7315036615081347181. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015504149s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-024443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-024443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=old-k8s-version-024443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-024443
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:18:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-024443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                3a233bec-8fde-40ac-b97e-b54a8a6dbbef
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-s4wnq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-024443                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-g8pwk                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-024443             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-024443    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-tzlpd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-024443             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b8j85        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7gk7m             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-024443 event: Registered Node old-k8s-version-024443 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-024443 status is now: NodeReady
	  Normal  Starting                 58s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x9 over 58s)      kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)      kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)      kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                    node-controller  Node old-k8s-version-024443 event: Registered Node old-k8s-version-024443 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f] <==
	{"level":"info","ts":"2025-10-18T12:17:50.250991Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-18T12:17:50.251306Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T12:17:50.251393Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T12:17:50.251438Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:17:50.251504Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:17:50.251518Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:17:50.253274Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T12:17:50.253493Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T12:17:50.253543Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T12:17:50.253623Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T12:17:50.253649Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T12:17:51.941634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T12:17:51.94168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T12:17:51.941704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-18T12:17:51.94172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.941726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.941733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.941741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.943519Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-024443 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T12:17:51.943517Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T12:17:51.943542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T12:17:51.943739Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T12:17:51.943799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T12:17:51.944852Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-18T12:17:51.944886Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:18:47 up  1:01,  0 user,  load average: 3.75, 4.04, 2.60
	Linux old-k8s-version-024443 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [698a48720393a674c29dfc41bbf1f15059de251c55cf7701f06cd21dd31b76d4] <==
	I1018 12:17:54.342652       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:17:54.343411       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:17:54.343612       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:17:54.343629       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:17:54.343651       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:17:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:17:54.602098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:17:54.602130       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:17:54.602143       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:17:54.602281       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:17:54.943327       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:54.943361       1 metrics.go:72] Registering metrics
	I1018 12:17:54.943465       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:04.603876       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:04.603967       1 main.go:301] handling current node
	I1018 12:18:14.602636       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:14.602673       1 main.go:301] handling current node
	I1018 12:18:24.601862       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:24.601893       1 main.go:301] handling current node
	I1018 12:18:34.604861       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:34.604901       1 main.go:301] handling current node
	I1018 12:18:44.608152       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:44.608197       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79] <==
	I1018 12:17:53.062403       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:17:53.108230       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 12:17:53.108291       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 12:17:53.108318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 12:17:53.108584       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 12:17:53.109244       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 12:17:53.109370       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 12:17:53.109382       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 12:17:53.111817       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 12:17:53.111971       1 aggregator.go:166] initial CRD sync complete...
	I1018 12:17:53.111987       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 12:17:53.111994       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:17:53.112000       1 cache.go:39] Caches are synced for autoregister controller
	E1018 12:17:53.117000       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 12:17:54.023639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:17:54.078325       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 12:17:54.190004       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 12:17:54.227465       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:17:54.238676       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:17:54.249154       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 12:17:54.294045       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.18.235"}
	I1018 12:17:54.314548       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.226.219"}
	I1018 12:18:05.671017       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:18:05.944504       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 12:18:06.093196       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923] <==
	I1018 12:18:05.949901       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1018 12:18:06.050274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="458.738182ms"
	I1018 12:18:06.050408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.924µs"
	I1018 12:18:06.051848       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-7gk7m"
	I1018 12:18:06.051957       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-b8j85"
	I1018 12:18:06.060417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="111.89032ms"
	I1018 12:18:06.060904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.289795ms"
	I1018 12:18:06.068425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.909189ms"
	I1018 12:18:06.068561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.802µs"
	I1018 12:18:06.072115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.492µs"
	I1018 12:18:06.073055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.576981ms"
	I1018 12:18:06.073156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.107µs"
	I1018 12:18:06.080944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.222µs"
	I1018 12:18:06.115089       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 12:18:06.127336       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 12:18:06.127373       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 12:18:08.757793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.32µs"
	I1018 12:18:09.765064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.016µs"
	I1018 12:18:10.773452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.055µs"
	I1018 12:18:11.776458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.698857ms"
	I1018 12:18:11.776542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="39.132µs"
	I1018 12:18:28.816589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.376µs"
	I1018 12:18:30.609379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.811123ms"
	I1018 12:18:30.609621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.446µs"
	I1018 12:18:36.446932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.854µs"
	
	
	==> kube-proxy [284392573f4ad6f3703725c92028a746af8799850cd474e5b9d2167b610c0589] <==
	I1018 12:17:54.146276       1 server_others.go:69] "Using iptables proxy"
	I1018 12:17:54.162050       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1018 12:17:54.200488       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:17:54.205105       1 server_others.go:152] "Using iptables Proxier"
	I1018 12:17:54.205280       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 12:17:54.205299       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 12:17:54.205338       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 12:17:54.205677       1 server.go:846] "Version info" version="v1.28.0"
	I1018 12:17:54.205961       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:54.207042       1 config.go:188] "Starting service config controller"
	I1018 12:17:54.208476       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 12:17:54.208069       1 config.go:315] "Starting node config controller"
	I1018 12:17:54.208605       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 12:17:54.208096       1 config.go:97] "Starting endpoint slice config controller"
	I1018 12:17:54.208668       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 12:17:54.309092       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 12:17:54.309159       1 shared_informer.go:318] Caches are synced for node config
	I1018 12:17:54.309335       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3] <==
	I1018 12:17:50.833235       1 serving.go:348] Generated self-signed cert in-memory
	I1018 12:17:53.097690       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 12:17:53.097725       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:53.103055       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1018 12:17:53.103143       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:53.103181       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1018 12:17:53.103200       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 12:17:53.103101       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 12:17:53.103308       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1018 12:17:53.104159       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 12:17:53.104243       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 12:17:53.204014       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 12:17:53.204016       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1018 12:17:53.204031       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.060166     726 topology_manager.go:215] "Topology Admit Handler" podUID="daca9387-7b3a-4193-b10d-25e2c8a391dd" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-7gk7m"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226691     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/daca9387-7b3a-4193-b10d-25e2c8a391dd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7gk7m\" (UID: \"daca9387-7b3a-4193-b10d-25e2c8a391dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226750     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/be653a6e-5540-4a5c-a717-68e89ee18574-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8j85\" (UID: \"be653a6e-5540-4a5c-a717-68e89ee18574\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226786     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hv22\" (UniqueName: \"kubernetes.io/projected/be653a6e-5540-4a5c-a717-68e89ee18574-kube-api-access-9hv22\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8j85\" (UID: \"be653a6e-5540-4a5c-a717-68e89ee18574\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226926     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmg6x\" (UniqueName: \"kubernetes.io/projected/daca9387-7b3a-4193-b10d-25e2c8a391dd-kube-api-access-xmg6x\") pod \"kubernetes-dashboard-8694d4445c-7gk7m\" (UID: \"daca9387-7b3a-4193-b10d-25e2c8a391dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m"
	Oct 18 12:18:08 old-k8s-version-024443 kubelet[726]: I1018 12:18:08.743579     726 scope.go:117] "RemoveContainer" containerID="9c8e1225a05abdfbc00fc62b5bc0984915505d934949eeee0939613801fd9443"
	Oct 18 12:18:09 old-k8s-version-024443 kubelet[726]: I1018 12:18:09.747982     726 scope.go:117] "RemoveContainer" containerID="9c8e1225a05abdfbc00fc62b5bc0984915505d934949eeee0939613801fd9443"
	Oct 18 12:18:09 old-k8s-version-024443 kubelet[726]: I1018 12:18:09.748364     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:09 old-k8s-version-024443 kubelet[726]: E1018 12:18:09.749128     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:10 old-k8s-version-024443 kubelet[726]: I1018 12:18:10.754612     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:10 old-k8s-version-024443 kubelet[726]: E1018 12:18:10.755009     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:11 old-k8s-version-024443 kubelet[726]: I1018 12:18:11.769698     726 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m" podStartSLOduration=0.579229247 podCreationTimestamp="2025-10-18 12:18:06 +0000 UTC" firstStartedPulling="2025-10-18 12:18:06.383538914 +0000 UTC m=+16.806469921" lastFinishedPulling="2025-10-18 12:18:11.573946323 +0000 UTC m=+21.996877330" observedRunningTime="2025-10-18 12:18:11.769531951 +0000 UTC m=+22.192462964" watchObservedRunningTime="2025-10-18 12:18:11.769636656 +0000 UTC m=+22.192567671"
	Oct 18 12:18:16 old-k8s-version-024443 kubelet[726]: I1018 12:18:16.360196     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:16 old-k8s-version-024443 kubelet[726]: E1018 12:18:16.360548     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:24 old-k8s-version-024443 kubelet[726]: I1018 12:18:24.789401     726 scope.go:117] "RemoveContainer" containerID="1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: I1018 12:18:28.673075     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: I1018 12:18:28.803451     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: I1018 12:18:28.803710     726 scope.go:117] "RemoveContainer" containerID="8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: E1018 12:18:28.804132     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:36 old-k8s-version-024443 kubelet[726]: I1018 12:18:36.360599     726 scope.go:117] "RemoveContainer" containerID="8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	Oct 18 12:18:36 old-k8s-version-024443 kubelet[726]: E1018 12:18:36.361036     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: kubelet.service: Consumed 1.610s CPU time.
	
	
	==> kubernetes-dashboard [7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb] <==
	2025/10/18 12:18:11 Starting overwatch
	2025/10/18 12:18:11 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:11 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:11 Using secret token for csrf signing
	2025/10/18 12:18:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:11 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 12:18:11 Generating JWE encryption key
	2025/10/18 12:18:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:11 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:11 Creating in-cluster Sidecar client
	2025/10/18 12:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:11 Serving insecurely on HTTP port: 9090
	2025/10/18 12:18:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78] <==
	I1018 12:17:54.121104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:18:24.127204       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7] <==
	I1018 12:18:24.848728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:18:24.856818       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:18:24.856860       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 12:18:42.257407       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:18:42.257552       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea2eab2-c98b-4fde-9bd6-441433386ca3", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-024443_cace15f0-1613-4a1e-96c3-83d339046a85 became leader
	I1018 12:18:42.257604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-024443_cace15f0-1613-4a1e-96c3-83d339046a85!
	I1018 12:18:42.357808       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-024443_cace15f0-1613-4a1e-96c3-83d339046a85!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-024443 -n old-k8s-version-024443
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-024443 -n old-k8s-version-024443: exit status 2 (398.678758ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-024443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
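To replay this phase of the post-mortem outside the test harness, the same commands can be run directly. A minimal sketch, assuming the pause step matches the harness invocation (the status and kubectl commands below are copied verbatim from the harness output in this report; a release minikube binary can stand in for the test build at out/minikube-linux-amd64):

	out/minikube-linux-amd64 pause -p old-k8s-version-024443 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-024443 -n old-k8s-version-024443
	kubectl --context old-k8s-version-024443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running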
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-024443
helpers_test.go:243: (dbg) docker inspect old-k8s-version-024443:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570",
	        "Created": "2025-10-18T12:16:27.110733205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309999,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:17:43.03153287Z",
	            "FinishedAt": "2025-10-18T12:17:41.87092059Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/hosts",
	        "LogPath": "/var/lib/docker/containers/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570/9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570-json.log",
	        "Name": "/old-k8s-version-024443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-024443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-024443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b192bc9f9a724d060cf99a898e5d6bdc7a17f05ded9f632ad841f6fce6a3570",
	                "LowerDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cecfc4c0113fa8f9c857128b1d2593c3e1dff65b374e90a3423a5349a0fc7ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-024443",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-024443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-024443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-024443",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-024443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c4077dd60b5a23f9638f5f1d9db9ee26ce8f067c60547e3755b5892713d0be18",
	            "SandboxKey": "/var/run/docker/netns/c4077dd60b5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-024443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:3b:07:46:28:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "704be5e99155d09cbf122649ccef6bb6653fc58dfc14bb6d440e5291162e7e3c",
	                    "EndpointID": "15d4c018851341f8eb5a9c5dad47746ef36d41417a0c2849beeb5bacedb0c5c4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-024443",
	                        "9b192bc9f9a7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
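For reference, the SSH host-port mapping recorded under "Ports" in the inspect output above is what minikube's own tooling reads back with a Go template; a sketch of the equivalent manual query against this profile's container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-024443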
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443: exit status 2 (390.616084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
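The same probe can be replayed by hand to inspect the exit code (a sketch; minikube status deliberately encodes component state in non-zero exit codes, which is why the harness treats exit status 2 as possibly ok):

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443; echo "exit=$?"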
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-024443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-024443 logs -n 25: (1.51294322s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-376567 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ ssh     │ -p bridge-376567 sudo crio config                                                                                                                                                                                                             │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p bridge-376567                                                                                                                                                                                                                              │ bridge-376567                │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ delete  │ -p disable-driver-mounts-200198                                                                                                                                                                                                               │ disable-driver-mounts-200198 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-024443 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p old-k8s-version-024443 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-406541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
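	# The pause entries above that lack an END TIME appear to be the failing step this
	# post-mortem covers; a sketch of replaying it against this profile:
	#   out/minikube-linux-amd64 pause -p old-k8s-version-024443 --alsologtostderr -v=1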
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:30.700052  319485 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:30.700328  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700338  319485 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:30.700342  319485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:30.700573  319485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:30.701112  319485 out.go:368] Setting JSON to false
	I1018 12:18:30.702451  319485 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3659,"bootTime":1760786252,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:30.702547  319485 start.go:141] virtualization: kvm guest
	I1018 12:18:30.704614  319485 out.go:179] * [embed-certs-175371] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:30.706016  319485 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:30.706038  319485 notify.go:220] Checking for updates...
	I1018 12:18:30.708920  319485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:30.710890  319485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:30.712258  319485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:30.713409  319485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:30.714965  319485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:30.716835  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:30.717456  319485 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:30.741640  319485 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:30.741748  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.802733  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.790905861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:30.802866  319485 docker.go:318] overlay module found
	I1018 12:18:30.805106  319485 out.go:179] * Using the docker driver based on existing profile
	W1018 12:18:26.415356  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	W1018 12:18:28.908743  310517 pod_ready.go:104] pod "coredns-66bc5c9577-bwvrq" is not "Ready", error: <nil>
	I1018 12:18:30.410244  310517 pod_ready.go:94] pod "coredns-66bc5c9577-bwvrq" is "Ready"
	I1018 12:18:30.410272  310517 pod_ready.go:86] duration metric: took 33.006670577s for pod "coredns-66bc5c9577-bwvrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.413489  310517 pod_ready.go:83] waiting for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.418087  310517 pod_ready.go:94] pod "etcd-no-preload-406541" is "Ready"
	I1018 12:18:30.418113  310517 pod_ready.go:86] duration metric: took 4.60176ms for pod "etcd-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.420752  310517 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.425914  310517 pod_ready.go:94] pod "kube-apiserver-no-preload-406541" is "Ready"
	I1018 12:18:30.425945  310517 pod_ready.go:86] duration metric: took 5.137183ms for pod "kube-apiserver-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.430423  310517 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.608129  310517 pod_ready.go:94] pod "kube-controller-manager-no-preload-406541" is "Ready"
	I1018 12:18:30.608164  310517 pod_ready.go:86] duration metric: took 177.709701ms for pod "kube-controller-manager-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.807461  310517 pod_ready.go:83] waiting for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.806468  319485 start.go:305] selected driver: docker
	I1018 12:18:30.806488  319485 start.go:925] validating driver "docker" against &{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.806613  319485 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:30.807410  319485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:30.867893  319485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:18:30.856888749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:30.868200  319485 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:30.868236  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:30.868281  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:30.868319  319485 start.go:349] cluster config:
	{Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:30.870215  319485 out.go:179] * Starting "embed-certs-175371" primary control-plane node in "embed-certs-175371" cluster
	I1018 12:18:30.871831  319485 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:30.873306  319485 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:30.874877  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:30.874928  319485 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:30.874944  319485 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:30.875010  319485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:30.875066  319485 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:30.875078  319485 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:30.875220  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:30.899840  319485 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:30.899862  319485 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:30.899879  319485 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:30.899905  319485 start.go:360] acquireMachinesLock for embed-certs-175371: {Name:mk656d4acd5501b1836b6cdb3453deba417e2657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:30.899958  319485 start.go:364] duration metric: took 36.728µs to acquireMachinesLock for "embed-certs-175371"
	I1018 12:18:30.899976  319485 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:18:30.899983  319485 fix.go:54] fixHost starting: 
	I1018 12:18:30.900188  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:30.918592  319485 fix.go:112] recreateIfNeeded on embed-certs-175371: state=Stopped err=<nil>
	W1018 12:18:30.918622  319485 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:18:31.208253  310517 pod_ready.go:94] pod "kube-proxy-9vbmr" is "Ready"
	I1018 12:18:31.208285  310517 pod_ready.go:86] duration metric: took 400.799145ms for pod "kube-proxy-9vbmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.407677  310517 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806754  310517 pod_ready.go:94] pod "kube-scheduler-no-preload-406541" is "Ready"
	I1018 12:18:31.806818  310517 pod_ready.go:86] duration metric: took 399.114489ms for pod "kube-scheduler-no-preload-406541" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.806829  310517 pod_ready.go:40] duration metric: took 34.407726613s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:31.854283  310517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:18:31.855987  310517 out.go:179] * Done! kubectl is now configured to use "no-preload-406541" cluster and "default" namespace by default
	W1018 12:18:29.376596  309793 pod_ready.go:104] pod "coredns-5dd5756b68-s4wnq" is not "Ready", error: <nil>
	I1018 12:18:30.875552  309793 pod_ready.go:94] pod "coredns-5dd5756b68-s4wnq" is "Ready"
	I1018 12:18:30.875577  309793 pod_ready.go:86] duration metric: took 36.005408914s for pod "coredns-5dd5756b68-s4wnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.878359  309793 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.883038  309793 pod_ready.go:94] pod "etcd-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.883061  309793 pod_ready.go:86] duration metric: took 4.681016ms for pod "etcd-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.886183  309793 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.890240  309793 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-024443" is "Ready"
	I1018 12:18:30.890262  309793 pod_ready.go:86] duration metric: took 4.059352ms for pod "kube-apiserver-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:30.893534  309793 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.074647  309793 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-024443" is "Ready"
	I1018 12:18:31.074685  309793 pod_ready.go:86] duration metric: took 181.128894ms for pod "kube-controller-manager-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.274861  309793 pod_ready.go:83] waiting for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.674522  309793 pod_ready.go:94] pod "kube-proxy-tzlpd" is "Ready"
	I1018 12:18:31.674555  309793 pod_ready.go:86] duration metric: took 399.668633ms for pod "kube-proxy-tzlpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:31.874734  309793 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274153  309793 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-024443" is "Ready"
	I1018 12:18:32.274178  309793 pod_ready.go:86] duration metric: took 399.401101ms for pod "kube-scheduler-old-k8s-version-024443" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:32.274188  309793 pod_ready.go:40] duration metric: took 37.409550626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:32.318706  309793 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 12:18:32.320699  309793 out.go:203] 
	W1018 12:18:32.322350  309793 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 12:18:32.323906  309793 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 12:18:32.325540  309793 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-024443" cluster and "default" namespace by default
	I1018 12:18:29.298582  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.303739  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:29.303786  317167 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:29.797387  317167 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1018 12:18:29.802331  317167 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1018 12:18:29.803460  317167 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:29.803483  317167 api_server.go:131] duration metric: took 1.00630107s to wait for apiserver health ...
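	# A sketch of the same health probe, run by hand against this profile's apiserver
	# (8444 is the --apiserver-port used for default-k8s-diff-port; -k skips the
	# self-signed cert check, and ?verbose lists the individual hooks shown above):
	#   curl -k https://192.168.103.2:8444/healthz?verbose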
	I1018 12:18:29.803491  317167 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:29.807265  317167 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:29.807303  317167 system_pods.go:61] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.807319  317167 system_pods.go:61] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.807327  317167 system_pods.go:61] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.807333  317167 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.807341  317167 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.807349  317167 system_pods.go:61] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.807368  317167 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.807380  317167 system_pods.go:61] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.807389  317167 system_pods.go:74] duration metric: took 3.891153ms to wait for pod list to return data ...
	I1018 12:18:29.807401  317167 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:29.810242  317167 default_sa.go:45] found service account: "default"
	I1018 12:18:29.810296  317167 default_sa.go:55] duration metric: took 2.860617ms for default service account to be created ...
	I1018 12:18:29.810306  317167 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:29.813451  317167 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:29.813483  317167 system_pods.go:89] "coredns-66bc5c9577-7qgqj" [ee994967-1cb7-4583-ba0d-debf8ccc08e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:29.813490  317167 system_pods.go:89] "etcd-default-k8s-diff-port-028309" [d2778ccc-443c-4462-8530-741269f1746d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:29.813495  317167 system_pods.go:89] "kindnet-hbfgg" [672043e3-34ce-4800-8142-07ba221b21bc] Running
	I1018 12:18:29.813500  317167 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028309" [81761029-9afd-461d-89b1-5b2f32e39f06] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:29.813506  317167 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028309" [d6e9f1e2-111d-4f19-9b8e-10d07c079a9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:29.813509  317167 system_pods.go:89] "kube-proxy-bffkr" [d988f171-de9d-485c-b4db-67222e30fc25] Running
	I1018 12:18:29.813514  317167 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028309" [53f9e280-a87d-4f65-b3b6-c94c2ef7cf9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:29.813520  317167 system_pods.go:89] "storage-provisioner" [8a70ca43-431c-461f-bac2-f916aa44de50] Running
	I1018 12:18:29.813527  317167 system_pods.go:126] duration metric: took 3.216525ms to wait for k8s-apps to be running ...
	I1018 12:18:29.813536  317167 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:29.813576  317167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:29.827054  317167 system_svc.go:56] duration metric: took 13.51026ms WaitForService to wait for kubelet
	I1018 12:18:29.827080  317167 kubeadm.go:586] duration metric: took 3.447871394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:29.827097  317167 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:29.830363  317167 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:29.830389  317167 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:29.830401  317167 node_conditions.go:105] duration metric: took 3.29887ms to run NodePressure ...
	I1018 12:18:29.830412  317167 start.go:241] waiting for startup goroutines ...
	I1018 12:18:29.830418  317167 start.go:246] waiting for cluster config update ...
	I1018 12:18:29.830429  317167 start.go:255] writing updated cluster config ...
	I1018 12:18:29.830727  317167 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:29.835232  317167 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:29.839676  317167 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:31.844958  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:33.845498  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:30.921314  319485 out.go:252] * Restarting existing docker container for "embed-certs-175371" ...
	I1018 12:18:30.921390  319485 cli_runner.go:164] Run: docker start embed-certs-175371
	I1018 12:18:31.169483  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:31.188693  319485 kic.go:430] container "embed-certs-175371" state is running.
	I1018 12:18:31.189103  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:31.209362  319485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/config.json ...
	I1018 12:18:31.209641  319485 machine.go:93] provisionDockerMachine start ...
	I1018 12:18:31.209725  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:31.229147  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:31.229379  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:31.229390  319485 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:18:31.229993  319485 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36872->127.0.0.1:33123: read: connection reset by peer
	I1018 12:18:34.383983  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.384015  319485 ubuntu.go:182] provisioning hostname "embed-certs-175371"
	I1018 12:18:34.384079  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.407484  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.407828  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.407850  319485 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-175371 && echo "embed-certs-175371" | sudo tee /etc/hostname
	I1018 12:18:34.571542  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175371
	
	I1018 12:18:34.571633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:34.593919  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:34.594233  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:34.594268  319485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-175371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175371/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-175371' | sudo tee -a /etc/hosts; 
				fi
			fi
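	# A sketch of verifying the /etc/hosts entry the script above writes, assuming
	# SSH access to the node (e.g. via minikube ssh -p embed-certs-175371):
	#   grep embed-certs-175371 /etc/hosts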
	I1018 12:18:34.745131  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:18:34.745165  319485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:18:34.745187  319485 ubuntu.go:190] setting up certificates
	I1018 12:18:34.745200  319485 provision.go:84] configureAuth start
	I1018 12:18:34.745288  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:34.769316  319485 provision.go:143] copyHostCerts
	I1018 12:18:34.769395  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:18:34.769421  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:18:34.769499  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:18:34.769623  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:18:34.769630  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:18:34.769673  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:18:34.769842  319485 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:18:34.769853  319485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:18:34.769895  319485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:18:34.769991  319485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175371 san=[127.0.0.1 192.168.76.2 embed-certs-175371 localhost minikube]
	I1018 12:18:35.347148  319485 provision.go:177] copyRemoteCerts
	I1018 12:18:35.347208  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:18:35.347243  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.368711  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:35.475696  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:18:35.507103  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:18:35.533969  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:18:35.562565  319485 provision.go:87] duration metric: took 817.346845ms to configureAuth
	I1018 12:18:35.562597  319485 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:18:35.562839  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:35.562989  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:35.590077  319485 main.go:141] libmachine: Using SSH client type: native
	I1018 12:18:35.590320  319485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1018 12:18:35.590341  319485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:18:36.705988  319485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:18:36.706031  319485 machine.go:96] duration metric: took 5.49637009s to provisionDockerMachine
	I1018 12:18:36.706047  319485 start.go:293] postStartSetup for "embed-certs-175371" (driver="docker")
	I1018 12:18:36.706060  319485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:18:36.706128  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:18:36.706190  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.727476  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.830826  319485 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:18:36.835533  319485 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:18:36.835569  319485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:18:36.835584  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:18:36.835636  319485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:18:36.835707  319485 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:18:36.835829  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:18:36.846005  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:36.869811  319485 start.go:296] duration metric: took 163.746336ms for postStartSetup
	I1018 12:18:36.869902  319485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:18:36.869946  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:36.893357  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:36.997968  319485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:18:37.004253  319485 fix.go:56] duration metric: took 6.104260841s for fixHost
	I1018 12:18:37.004285  319485 start.go:83] releasing machines lock for "embed-certs-175371", held for 6.104316695s
	I1018 12:18:37.004355  319485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175371
	I1018 12:18:37.029349  319485 ssh_runner.go:195] Run: cat /version.json
	I1018 12:18:37.029412  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.029566  319485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:18:37.029633  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:37.054331  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.058158  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:37.158913  319485 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:37.235612  319485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:18:37.281675  319485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:18:37.287892  319485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:18:37.287969  319485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:18:37.298848  319485 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:18:37.298875  319485 start.go:495] detecting cgroup driver to use...
	I1018 12:18:37.298911  319485 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:18:37.298960  319485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:18:37.318507  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:18:37.335843  319485 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:18:37.335916  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:18:37.357159  319485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:18:37.373241  319485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:18:37.464197  319485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:18:37.557992  319485 docker.go:234] disabling docker service ...
	I1018 12:18:37.558064  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:18:37.573855  319485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:18:37.587606  319485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:18:37.677046  319485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:18:37.786485  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:18:37.800125  319485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:18:37.814639  319485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:18:37.814703  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.823696  319485 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:18:37.823802  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.833404  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.843440  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.852880  319485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:18:37.861252  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.870194  319485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.878686  319485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:18:37.887388  319485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:18:37.894731  319485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
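
Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a systemd cgroup manager, the pinned pause image, conmon in the pod cgroup, and unprivileged low ports enabled. A sketch of checking the result (expected values are reconstructed from the commands above, not read from the machine):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])
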
	I1018 12:18:37.902146  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:37.980625  319485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:18:38.435447  319485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:18:38.435521  319485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:18:38.439678  319485 start.go:563] Will wait 60s for crictl version
	I1018 12:18:38.439734  319485 ssh_runner.go:195] Run: which crictl
	I1018 12:18:38.443262  319485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:18:38.467148  319485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:18:38.467213  319485 ssh_runner.go:195] Run: crio --version
	I1018 12:18:38.495216  319485 ssh_runner.go:195] Run: crio --version
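
Note: the version probe can be reproduced by hand; RuntimeApiVersion v1 is the CRI v1 API that kubelet 1.34 expects. A sketch against the same socket:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version
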
	I1018 12:18:38.525571  319485 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 12:18:35.846564  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:38.345142  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:38.527068  319485 cli_runner.go:164] Run: docker network inspect embed-certs-175371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:38.546516  319485 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 12:18:38.550993  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:18:38.561695  319485 kubeadm.go:883] updating cluster {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:18:38.561845  319485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:38.561901  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.598535  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.598563  319485 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:18:38.598618  319485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:18:38.630421  319485 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:18:38.630442  319485 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:18:38.630450  319485 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 12:18:38.630539  319485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-175371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
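
Note: the [Service] block above uses the standard systemd drop-in idiom: the empty ExecStart= first clears whatever the base kubelet.service sets, then the second ExecStart= installs the minikube-specific command line. A sketch of inspecting the merged result on the node:

    systemctl cat kubelet           # base unit followed by 10-kubeadm.conf
    systemd-delta --type=extended   # lists units extended by drop-ins
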
	I1018 12:18:38.630598  319485 ssh_runner.go:195] Run: crio config
	I1018 12:18:38.679497  319485 cni.go:84] Creating CNI manager for ""
	I1018 12:18:38.679521  319485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:38.679539  319485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:18:38.679558  319485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175371 NodeName:embed-certs-175371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:18:38.679684  319485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-175371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:18:38.679753  319485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:18:38.689079  319485 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:18:38.689144  319485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:18:38.697752  319485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 12:18:38.712315  319485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:18:38.726955  319485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
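
Note: the staged kubeadm.yaml.new holds four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, dumped above). A hedged sketch of exercising it without touching the cluster, using the same pinned binary and standard kubeadm flags:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
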
	I1018 12:18:38.742413  319485 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:18:38.747169  319485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
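
Note: the command above is an idempotent replace-or-append for a single /etc/hosts entry: strip any old line ending in the name, re-append the current mapping, then copy the temp file back into place. The same pattern, generalized (NAME and IP are placeholders):

    NAME=control-plane.minikube.internal; IP=192.168.76.2
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
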
	I1018 12:18:38.758198  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:38.854804  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:38.876145  319485 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371 for IP: 192.168.76.2
	I1018 12:18:38.876167  319485 certs.go:195] generating shared ca certs ...
	I1018 12:18:38.876187  319485 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:38.876358  319485 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:18:38.876406  319485 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:18:38.876416  319485 certs.go:257] generating profile certs ...
	I1018 12:18:38.876507  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/client.key
	I1018 12:18:38.876562  319485 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key.760612f0
	I1018 12:18:38.876613  319485 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key
	I1018 12:18:38.876718  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:18:38.876744  319485 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:18:38.876751  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:18:38.876795  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:18:38.876824  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:18:38.876845  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:18:38.876882  319485 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:18:38.877407  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:18:38.896628  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:18:38.916658  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:18:38.936639  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:18:38.960966  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 12:18:38.980170  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:18:38.997882  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:18:39.015725  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/embed-certs-175371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:18:39.032805  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:18:39.049790  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:18:39.068080  319485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:18:39.086062  319485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:18:39.098810  319485 ssh_runner.go:195] Run: openssl version
	I1018 12:18:39.105009  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:18:39.113777  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117712  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.117797  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:18:39.153127  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:18:39.162168  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:18:39.171385  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175469  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.175546  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:18:39.210362  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:18:39.218971  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:18:39.229154  319485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233188  319485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.233248  319485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:18:39.268526  319485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
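
Note: the .0 filenames created above are OpenSSL subject-hash links: OpenSSL looks up CAs in a -CApath directory by the `openssl x509 -hash` of the subject, so b5213941.0 must point at minikubeCA.pem. A sketch, with a hedged verification step (apiserver.crt is signed directly by minikubeCA per the cert setup above):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt
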
	I1018 12:18:39.276871  319485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:18:39.280846  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:18:39.315107  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:18:39.350704  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:18:39.387775  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:18:39.435187  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:18:39.475299  319485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
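
Note: each check above relies on `openssl x509 -checkend N`, which exits non-zero when the certificate expires within N seconds; 86400 makes it a 24-hour guard. A minimal sketch:

    if sudo openssl x509 -noout -checkend 86400 \
           -in /var/lib/minikube/certs/apiserver.crt; then
        echo "apiserver cert valid for at least 24h"
    else
        echo "apiserver cert expires within 24h"
    fi
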
	I1018 12:18:39.529584  319485 kubeadm.go:400] StartCluster: {Name:embed-certs-175371 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-175371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:39.529660  319485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:18:39.529707  319485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:18:39.572206  319485 cri.go:89] found id: "7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd"
	I1018 12:18:39.572238  319485 cri.go:89] found id: "8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e"
	I1018 12:18:39.572245  319485 cri.go:89] found id: "d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40"
	I1018 12:18:39.572250  319485 cri.go:89] found id: "a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f"
	I1018 12:18:39.572255  319485 cri.go:89] found id: ""
	I1018 12:18:39.572310  319485 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:18:39.585733  319485 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:18:39.585815  319485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:18:39.594298  319485 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:18:39.594319  319485 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:18:39.594367  319485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:18:39.604664  319485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:18:39.605663  319485 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-175371" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.606304  319485 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-175371" cluster setting kubeconfig missing "embed-certs-175371" context setting]
	I1018 12:18:39.607392  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.609392  319485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:18:39.617900  319485 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 12:18:39.617934  319485 kubeadm.go:601] duration metric: took 23.608426ms to restartPrimaryControlPlane
	I1018 12:18:39.617944  319485 kubeadm.go:402] duration metric: took 88.372405ms to StartCluster
	I1018 12:18:39.617961  319485 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.618027  319485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:39.620424  319485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:39.620686  319485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:39.620787  319485 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:18:39.620892  319485 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-175371"
	I1018 12:18:39.620905  319485 addons.go:69] Setting dashboard=true in profile "embed-certs-175371"
	I1018 12:18:39.620954  319485 addons.go:238] Setting addon dashboard=true in "embed-certs-175371"
	W1018 12:18:39.620966  319485 addons.go:247] addon dashboard should already be in state true
	I1018 12:18:39.621000  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.621038  319485 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:39.620915  319485 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-175371"
	W1018 12:18:39.621060  319485 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:18:39.621089  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.620920  319485 addons.go:69] Setting default-storageclass=true in profile "embed-certs-175371"
	I1018 12:18:39.621185  319485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-175371"
	I1018 12:18:39.621523  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621548  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.621562  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.623582  319485 out.go:179] * Verifying Kubernetes components...
	I1018 12:18:39.624890  319485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:39.647395  319485 addons.go:238] Setting addon default-storageclass=true in "embed-certs-175371"
	W1018 12:18:39.647416  319485 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:18:39.647444  319485 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:18:39.647878  319485 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:18:39.649378  319485 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:18:39.649377  319485 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:18:39.650859  319485 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.650877  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:18:39.650935  319485 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:18:39.650953  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.652294  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:18:39.652313  319485 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:18:39.652366  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.685481  319485 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.685508  319485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:18:39.685565  319485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:18:39.688909  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.691698  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.715793  319485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:18:39.776976  319485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:39.796702  319485 node_ready.go:35] waiting up to 6m0s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:39.810215  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:39.810840  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:18:39.810861  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:18:39.827587  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:18:39.827617  319485 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:18:39.832984  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:39.846934  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:18:39.846963  319485 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:18:39.866940  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:18:39.866963  319485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:18:39.884653  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:18:39.884676  319485 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:18:39.899737  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:18:39.899797  319485 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:18:39.914273  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:18:39.914304  319485 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:18:39.928891  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:18:39.928922  319485 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:18:39.941986  319485 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:18:39.942011  319485 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:18:39.956234  319485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
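
Note: all ten dashboard manifests go to the API server in a single kubectl apply with repeated -f flags. Since kubectl apply also accepts a directory, an equivalent (broader) sketch would be the following, though it would additionally pick up the other addon manifests staged under the same path:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/
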
	I1018 12:18:41.376829  319485 node_ready.go:49] node "embed-certs-175371" is "Ready"
	I1018 12:18:41.376867  319485 node_ready.go:38] duration metric: took 1.579990475s for node "embed-certs-175371" to be "Ready" ...
	I1018 12:18:41.376885  319485 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:18:41.376941  319485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:41.913233  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.102983393s)
	I1018 12:18:41.913329  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.08031124s)
	I1018 12:18:41.913460  319485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.957177067s)
	I1018 12:18:41.913484  319485 api_server.go:72] duration metric: took 2.292768638s to wait for apiserver process to appear ...
	I1018 12:18:41.913497  319485 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:18:41.913526  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:41.918402  319485 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-175371 addons enable metrics-server
	
	I1018 12:18:41.919631  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:41.919655  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
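
Note: the two [-] entries are post-start hooks that finish only once the freshly restarted apiserver has written its bootstrap RBAC roles and default priority classes, so a 500 here is expected to clear on its own (as the later 200 below confirms). The same verbose output, and each individual check, can be queried through the API:

    kubectl get --raw='/healthz?verbose'
    kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'
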
	I1018 12:18:41.925471  319485 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 12:18:40.346078  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:42.347310  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:41.927054  319485 addons.go:514] duration metric: took 2.306294485s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 12:18:42.413938  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.418439  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:42.418474  319485 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:42.913848  319485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 12:18:42.918735  319485 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 12:18:42.919687  319485 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:42.919718  319485 api_server.go:131] duration metric: took 1.006210574s to wait for apiserver health ...
	I1018 12:18:42.919726  319485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:42.923301  319485 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:42.923341  319485 system_pods.go:61] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.923353  319485 system_pods.go:61] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.923364  319485 system_pods.go:61] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.923373  319485 system_pods.go:61] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.923383  319485 system_pods.go:61] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.923397  319485 system_pods.go:61] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.923409  319485 system_pods.go:61] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.923448  319485 system_pods.go:61] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.923466  319485 system_pods.go:74] duration metric: took 3.733114ms to wait for pod list to return data ...
	I1018 12:18:42.923476  319485 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:42.926029  319485 default_sa.go:45] found service account: "default"
	I1018 12:18:42.926061  319485 default_sa.go:55] duration metric: took 2.577664ms for default service account to be created ...
	I1018 12:18:42.926074  319485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:42.929022  319485 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:42.929049  319485 system_pods.go:89] "coredns-66bc5c9577-b6h9l" [bf0c7f4f-476e-4faf-9159-580059735927] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:18:42.929057  319485 system_pods.go:89] "etcd-embed-certs-175371" [78ddf662-3465-4bf6-8514-500ccc419f56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:42.929063  319485 system_pods.go:89] "kindnet-dxw8r" [c2fd96d1-3e9e-4a3f-b8a7-7214e6bd79da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:18:42.929069  319485 system_pods.go:89] "kube-apiserver-embed-certs-175371" [4357b213-beda-4ed7-b5b7-8a7ee35900fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:42.929074  319485 system_pods.go:89] "kube-controller-manager-embed-certs-175371" [5f063dc0-4c2c-434c-a534-54e2ca90614f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:42.929079  319485 system_pods.go:89] "kube-proxy-t2x4c" [9d5ade84-59a3-4948-ba28-a6663bd749ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:18:42.929084  319485 system_pods.go:89] "kube-scheduler-embed-certs-175371" [24ee0c7e-121d-42ff-ac1c-ce69f7cc6511] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:42.929088  319485 system_pods.go:89] "storage-provisioner" [d598f5a5-5d3e-4ad8-9266-ea4fee4648c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:42.929095  319485 system_pods.go:126] duration metric: took 3.016302ms to wait for k8s-apps to be running ...
	I1018 12:18:42.929105  319485 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:42.929153  319485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:42.942149  319485 system_svc.go:56] duration metric: took 13.033259ms WaitForService to wait for kubelet
	I1018 12:18:42.942182  319485 kubeadm.go:586] duration metric: took 3.321467327s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:42.942204  319485 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:42.944896  319485 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:18:42.944917  319485 node_conditions.go:123] node cpu capacity is 8
	I1018 12:18:42.944942  319485 node_conditions.go:105] duration metric: took 2.731777ms to run NodePressure ...
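
Note: the NodePressure check above reads capacity straight off the Node object; the same fields are visible with a jsonpath query:

    kubectl get node embed-certs-175371 \
        -o jsonpath='{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}'
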
	I1018 12:18:42.944955  319485 start.go:241] waiting for startup goroutines ...
	I1018 12:18:42.944969  319485 start.go:246] waiting for cluster config update ...
	I1018 12:18:42.945001  319485 start.go:255] writing updated cluster config ...
	I1018 12:18:42.945268  319485 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:42.949454  319485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:42.952932  319485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:44.959171  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 12:18:11 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:11.616256588Z" level=info msg="Created container 7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m/kubernetes-dashboard" id=c31d5d1b-21bd-4056-bd7e-2188389904bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:11 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:11.616972315Z" level=info msg="Starting container: 7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb" id=09a1cd46-54af-45ed-b5cd-2dff48f524ed name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:11 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:11.619112027Z" level=info msg="Started container" PID=1725 containerID=7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m/kubernetes-dashboard id=09a1cd46-54af-45ed-b5cd-2dff48f524ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f12c5c060827f15e66ad580061c6dccbc67100f3004cd56827514387e89910f
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.789966277Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ad1b2d6f-fb13-4a0c-bcd5-95a92af37edd name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.790960523Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5bce0974-6151-4fe7-a2c8-92289272e09d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.791955037Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=66f6eb97-1197-4432-96aa-d55522163295 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.792229211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.798234452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.798461929Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b0a1d8543e432f19f9929b66f052cbf3d933b95ea7dc5801a148647b55fb1465/merged/etc/passwd: no such file or directory"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.798609647Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b0a1d8543e432f19f9929b66f052cbf3d933b95ea7dc5801a148647b55fb1465/merged/etc/group: no such file or directory"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.79898679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.832015099Z" level=info msg="Created container 247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7: kube-system/storage-provisioner/storage-provisioner" id=66f6eb97-1197-4432-96aa-d55522163295 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.832664287Z" level=info msg="Starting container: 247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7" id=b9cc912f-0bb6-4621-a540-d4906337ee7a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:24 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:24.834897806Z" level=info msg="Started container" PID=1749 containerID=247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7 description=kube-system/storage-provisioner/storage-provisioner id=b9cc912f-0bb6-4621-a540-d4906337ee7a name=/runtime.v1.RuntimeService/StartContainer sandboxID=346c387bf6c228550bcc0d24af90172964bc889faa361401d51b3b7a151d650b
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.6738114Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8b2dcdeb-32e2-4559-97ae-c04770a486ce name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.675286063Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b7ecc69e-547a-4f4a-9bfc-1b6ae982990f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.676395649Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper" id=6d068cee-d36f-4059-924d-5405a31dcbdb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.676701662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.688040864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.688693198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.728564016Z" level=info msg="Created container 8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper" id=6d068cee-d36f-4059-924d-5405a31dcbdb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.730592675Z" level=info msg="Starting container: 8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd" id=4c2f8470-9a08-4609-9fdf-e436eda0462c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.733357372Z" level=info msg="Started container" PID=1765 containerID=8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper id=4c2f8470-9a08-4609-9fdf-e436eda0462c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d90a407ff483c643969ead4caa6556f121c0ad5520de1dc3076beaadc68918af
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.805068241Z" level=info msg="Removing container: e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc" id=f13cf10a-443b-4e08-aeb8-0184a50c050f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:28 old-k8s-version-024443 crio[567]: time="2025-10-18T12:18:28.816128566Z" level=info msg="Removed container e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85/dashboard-metrics-scraper" id=f13cf10a-443b-4e08-aeb8-0184a50c050f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8b3e716afde9f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   d90a407ff483c       dashboard-metrics-scraper-5f989dc9cf-b8j85       kubernetes-dashboard
	247925a32df25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   346c387bf6c22       storage-provisioner                              kube-system
	7639427c91a82       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   8f12c5c060827       kubernetes-dashboard-8694d4445c-7gk7m            kubernetes-dashboard
	d7cc7969f8959       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   287bf5f53ebb3       coredns-5dd5756b68-s4wnq                         kube-system
	027011fa4fdb8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   f3ea23a27e8fd       busybox                                          default
	1a759c1022fc6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   346c387bf6c22       storage-provisioner                              kube-system
	284392573f4ad       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   9f997237d8cd9       kube-proxy-tzlpd                                 kube-system
	698a48720393a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   c8e304c0de167       kindnet-g8pwk                                    kube-system
	c1618cf2491e6       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   9f10de74d1082       kube-apiserver-old-k8s-version-024443            kube-system
	b9fd7b97fe26a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   458d42ebe5e93       kube-scheduler-old-k8s-version-024443            kube-system
	c664320629fb5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   c2f81268dce80       etcd-old-k8s-version-024443                      kube-system
	cd847940cd839       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   503ae8ca0b684       kube-controller-manager-old-k8s-version-024443   kube-system
	
	
	==> coredns [d7cc7969f8959a73ae35786fd5ff767a8bfa2ebbac51d066ef36cdfed10301be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50130 - 25725 "HINFO IN 3914257451278979214.7315036615081347181. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015504149s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-024443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-024443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=old-k8s-version-024443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-024443
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:18:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:18:23 +0000   Sat, 18 Oct 2025 12:17:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-024443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                3a233bec-8fde-40ac-b97e-b54a8a6dbbef
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-s4wnq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-024443                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-g8pwk                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-024443             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-024443    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-tzlpd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-024443             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b8j85        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7gk7m             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                   node-controller  Node old-k8s-version-024443 event: Registered Node old-k8s-version-024443 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-024443 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x9 over 60s)      kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node old-k8s-version-024443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x7 over 60s)      kubelet          Node old-k8s-version-024443 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-024443 event: Registered Node old-k8s-version-024443 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [c664320629fb594f08d0b5b11b435430f4ed28eaed8d94b8f5952428aa171a2f] <==
	{"level":"info","ts":"2025-10-18T12:17:50.250991Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-18T12:17:50.251306Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T12:17:50.251393Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T12:17:50.251438Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:17:50.251504Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:17:50.251518Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T12:17:50.253274Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T12:17:50.253493Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T12:17:50.253543Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T12:17:50.253623Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T12:17:50.253649Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T12:17:51.941634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T12:17:51.94168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T12:17:51.941704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-18T12:17:51.94172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.941726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.941733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.941741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T12:17:51.943519Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-024443 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T12:17:51.943517Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T12:17:51.943542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T12:17:51.943739Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T12:17:51.943799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T12:17:51.944852Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-18T12:17:51.944886Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:18:50 up  1:01,  0 user,  load average: 3.85, 4.05, 2.62
	Linux old-k8s-version-024443 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [698a48720393a674c29dfc41bbf1f15059de251c55cf7701f06cd21dd31b76d4] <==
	I1018 12:17:54.342652       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:17:54.343411       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:17:54.343612       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:17:54.343629       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:17:54.343651       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:17:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:17:54.602098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:17:54.602130       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:17:54.602143       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:17:54.602281       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:17:54.943327       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:54.943361       1 metrics.go:72] Registering metrics
	I1018 12:17:54.943465       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:04.603876       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:04.603967       1 main.go:301] handling current node
	I1018 12:18:14.602636       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:14.602673       1 main.go:301] handling current node
	I1018 12:18:24.601862       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:24.601893       1 main.go:301] handling current node
	I1018 12:18:34.604861       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:34.604901       1 main.go:301] handling current node
	I1018 12:18:44.608152       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 12:18:44.608197       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c1618cf2491e60c5f264f84236c3e565212efb40b779ad4dfc51547e5f21be79] <==
	I1018 12:17:53.062403       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:17:53.108230       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 12:17:53.108291       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 12:17:53.108318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 12:17:53.108584       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 12:17:53.109244       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 12:17:53.109370       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 12:17:53.109382       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 12:17:53.111817       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 12:17:53.111971       1 aggregator.go:166] initial CRD sync complete...
	I1018 12:17:53.111987       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 12:17:53.111994       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:17:53.112000       1 cache.go:39] Caches are synced for autoregister controller
	E1018 12:17:53.117000       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 12:17:54.023639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:17:54.078325       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 12:17:54.190004       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 12:17:54.227465       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:17:54.238676       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:17:54.249154       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 12:17:54.294045       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.18.235"}
	I1018 12:17:54.314548       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.226.219"}
	I1018 12:18:05.671017       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:18:05.944504       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 12:18:06.093196       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cd847940cd839a77a7dd6283540c50c9b5c0f1ec5b64bfe2ed49728cb0998923] <==
	I1018 12:18:05.949901       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1018 12:18:06.050274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="458.738182ms"
	I1018 12:18:06.050408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.924µs"
	I1018 12:18:06.051848       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-7gk7m"
	I1018 12:18:06.051957       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-b8j85"
	I1018 12:18:06.060417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="111.89032ms"
	I1018 12:18:06.060904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.289795ms"
	I1018 12:18:06.068425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.909189ms"
	I1018 12:18:06.068561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.802µs"
	I1018 12:18:06.072115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.492µs"
	I1018 12:18:06.073055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.576981ms"
	I1018 12:18:06.073156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.107µs"
	I1018 12:18:06.080944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.222µs"
	I1018 12:18:06.115089       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 12:18:06.127336       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 12:18:06.127373       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 12:18:08.757793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.32µs"
	I1018 12:18:09.765064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.016µs"
	I1018 12:18:10.773452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.055µs"
	I1018 12:18:11.776458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.698857ms"
	I1018 12:18:11.776542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="39.132µs"
	I1018 12:18:28.816589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.376µs"
	I1018 12:18:30.609379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.811123ms"
	I1018 12:18:30.609621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.446µs"
	I1018 12:18:36.446932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.854µs"
	
	
	==> kube-proxy [284392573f4ad6f3703725c92028a746af8799850cd474e5b9d2167b610c0589] <==
	I1018 12:17:54.146276       1 server_others.go:69] "Using iptables proxy"
	I1018 12:17:54.162050       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1018 12:17:54.200488       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:17:54.205105       1 server_others.go:152] "Using iptables Proxier"
	I1018 12:17:54.205280       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 12:17:54.205299       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 12:17:54.205338       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 12:17:54.205677       1 server.go:846] "Version info" version="v1.28.0"
	I1018 12:17:54.205961       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:54.207042       1 config.go:188] "Starting service config controller"
	I1018 12:17:54.208476       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 12:17:54.208069       1 config.go:315] "Starting node config controller"
	I1018 12:17:54.208605       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 12:17:54.208096       1 config.go:97] "Starting endpoint slice config controller"
	I1018 12:17:54.208668       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 12:17:54.309092       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 12:17:54.309159       1 shared_informer.go:318] Caches are synced for node config
	I1018 12:17:54.309335       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b9fd7b97fe26af7875425214d9a97dc3856195255cc6b76a7313c710605084a3] <==
	I1018 12:17:50.833235       1 serving.go:348] Generated self-signed cert in-memory
	I1018 12:17:53.097690       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 12:17:53.097725       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:53.103055       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1018 12:17:53.103143       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:53.103181       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1018 12:17:53.103200       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 12:17:53.103101       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 12:17:53.103308       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1018 12:17:53.104159       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 12:17:53.104243       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 12:17:53.204014       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 12:17:53.204016       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1018 12:17:53.204031       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.060166     726 topology_manager.go:215] "Topology Admit Handler" podUID="daca9387-7b3a-4193-b10d-25e2c8a391dd" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-7gk7m"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226691     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/daca9387-7b3a-4193-b10d-25e2c8a391dd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7gk7m\" (UID: \"daca9387-7b3a-4193-b10d-25e2c8a391dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226750     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/be653a6e-5540-4a5c-a717-68e89ee18574-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8j85\" (UID: \"be653a6e-5540-4a5c-a717-68e89ee18574\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226786     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hv22\" (UniqueName: \"kubernetes.io/projected/be653a6e-5540-4a5c-a717-68e89ee18574-kube-api-access-9hv22\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8j85\" (UID: \"be653a6e-5540-4a5c-a717-68e89ee18574\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85"
	Oct 18 12:18:06 old-k8s-version-024443 kubelet[726]: I1018 12:18:06.226926     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmg6x\" (UniqueName: \"kubernetes.io/projected/daca9387-7b3a-4193-b10d-25e2c8a391dd-kube-api-access-xmg6x\") pod \"kubernetes-dashboard-8694d4445c-7gk7m\" (UID: \"daca9387-7b3a-4193-b10d-25e2c8a391dd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m"
	Oct 18 12:18:08 old-k8s-version-024443 kubelet[726]: I1018 12:18:08.743579     726 scope.go:117] "RemoveContainer" containerID="9c8e1225a05abdfbc00fc62b5bc0984915505d934949eeee0939613801fd9443"
	Oct 18 12:18:09 old-k8s-version-024443 kubelet[726]: I1018 12:18:09.747982     726 scope.go:117] "RemoveContainer" containerID="9c8e1225a05abdfbc00fc62b5bc0984915505d934949eeee0939613801fd9443"
	Oct 18 12:18:09 old-k8s-version-024443 kubelet[726]: I1018 12:18:09.748364     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:09 old-k8s-version-024443 kubelet[726]: E1018 12:18:09.749128     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:10 old-k8s-version-024443 kubelet[726]: I1018 12:18:10.754612     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:10 old-k8s-version-024443 kubelet[726]: E1018 12:18:10.755009     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:11 old-k8s-version-024443 kubelet[726]: I1018 12:18:11.769698     726 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7gk7m" podStartSLOduration=0.579229247 podCreationTimestamp="2025-10-18 12:18:06 +0000 UTC" firstStartedPulling="2025-10-18 12:18:06.383538914 +0000 UTC m=+16.806469921" lastFinishedPulling="2025-10-18 12:18:11.573946323 +0000 UTC m=+21.996877330" observedRunningTime="2025-10-18 12:18:11.769531951 +0000 UTC m=+22.192462964" watchObservedRunningTime="2025-10-18 12:18:11.769636656 +0000 UTC m=+22.192567671"
	Oct 18 12:18:16 old-k8s-version-024443 kubelet[726]: I1018 12:18:16.360196     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:16 old-k8s-version-024443 kubelet[726]: E1018 12:18:16.360548     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:24 old-k8s-version-024443 kubelet[726]: I1018 12:18:24.789401     726 scope.go:117] "RemoveContainer" containerID="1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: I1018 12:18:28.673075     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: I1018 12:18:28.803451     726 scope.go:117] "RemoveContainer" containerID="e42da0511b3f401feeb10b48e5ec8f7ff95c92fa590e6b79ffd56caa437209fc"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: I1018 12:18:28.803710     726 scope.go:117] "RemoveContainer" containerID="8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	Oct 18 12:18:28 old-k8s-version-024443 kubelet[726]: E1018 12:18:28.804132     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:36 old-k8s-version-024443 kubelet[726]: I1018 12:18:36.360599     726 scope.go:117] "RemoveContainer" containerID="8b3e716afde9f48058617565b8e95c5e8259830581a273cf2d765c1152eb3ffd"
	Oct 18 12:18:36 old-k8s-version-024443 kubelet[726]: E1018 12:18:36.361036     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8j85_kubernetes-dashboard(be653a6e-5540-4a5c-a717-68e89ee18574)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8j85" podUID="be653a6e-5540-4a5c-a717-68e89ee18574"
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:18:44 old-k8s-version-024443 systemd[1]: kubelet.service: Consumed 1.610s CPU time.
	
	
	==> kubernetes-dashboard [7639427c91a82a37b0a5b9d91dc9de5ccbb5db91445889266a268aaf57c64ddb] <==
	2025/10/18 12:18:11 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:11 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:11 Using secret token for csrf signing
	2025/10/18 12:18:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:11 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 12:18:11 Generating JWE encryption key
	2025/10/18 12:18:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:11 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:11 Creating in-cluster Sidecar client
	2025/10/18 12:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:11 Serving insecurely on HTTP port: 9090
	2025/10/18 12:18:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:11 Starting overwatch
	
	
	==> storage-provisioner [1a759c1022fc648d15de94f7193598eb07b5a7f318b6e11d24a4702d3ec03b78] <==
	I1018 12:17:54.121104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:18:24.127204       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [247925a32df258cd29376583f360c15f442b55a9f1a8b643d4538383ac9c74a7] <==
	I1018 12:18:24.848728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:18:24.856818       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:18:24.856860       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 12:18:42.257407       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:18:42.257552       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ea2eab2-c98b-4fde-9bd6-441433386ca3", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-024443_cace15f0-1613-4a1e-96c3-83d339046a85 became leader
	I1018 12:18:42.257604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-024443_cace15f0-1613-4a1e-96c3-83d339046a85!
	I1018 12:18:42.357808       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-024443_cace15f0-1613-4a1e-96c3-83d339046a85!

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-024443 -n old-k8s-version-024443
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-024443 -n old-k8s-version-024443: exit status 2 (411.23519ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-024443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.09s)
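The pause checks in this group boil down to shelling out to the minikube binary and treating any non-zero exit as a failure. Below is a minimal Go sketch of the same probe, assuming only the binary path, profile name, and flags that appear in the logs above; the error handling is illustrative and is not the harness's own code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test runs against the old-k8s-version profile.
		cmd := exec.Command("out/minikube-linux-amd64", "pause",
			"-p", "old-k8s-version-024443", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if ee, ok := err.(*exec.ExitError); ok {
			// Any non-zero exit here is what the harness reports as a failure.
			fmt.Printf("exit status %d\n", ee.ExitCode())
		}
	}

Run against a cluster in this state, it prints the pause output followed by the non-zero exit status (the next section shows exit status 80 for the same failure mode).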

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-028309 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-028309 --alsologtostderr -v=1: exit status 80 (1.840018739s)

-- stdout --
	* Pausing node default-k8s-diff-port-028309 ... 

-- /stdout --
** stderr ** 
	I1018 12:19:20.550872  329523 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:20.551192  329523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:20.551203  329523 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:20.551207  329523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:20.551455  329523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:19:20.551731  329523 out.go:368] Setting JSON to false
	I1018 12:19:20.551834  329523 mustload.go:65] Loading cluster: default-k8s-diff-port-028309
	I1018 12:19:20.552389  329523 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.553004  329523 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-028309 --format={{.State.Status}}
	I1018 12:19:20.576445  329523 host.go:66] Checking if "default-k8s-diff-port-028309" exists ...
	I1018 12:19:20.576830  329523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:19:20.660940  329523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:19:20.64632125 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:19:20.662532  329523 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-028309 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 12:19:20.664481  329523 out.go:179] * Pausing node default-k8s-diff-port-028309 ... 
	I1018 12:19:20.665916  329523 host.go:66] Checking if "default-k8s-diff-port-028309" exists ...
	I1018 12:19:20.666296  329523 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:20.666343  329523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-028309
	I1018 12:19:20.692322  329523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/default-k8s-diff-port-028309/id_rsa Username:docker}
	I1018 12:19:20.796954  329523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:20.827293  329523 pause.go:52] kubelet running: true
	I1018 12:19:20.827363  329523 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:21.009858  329523 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:21.009937  329523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:21.081041  329523 cri.go:89] found id: "7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d"
	I1018 12:19:21.081066  329523 cri.go:89] found id: "3a791b10f6b7292113c4ab4334268fa9103739de78ecf9577cda655bc7e04ad8"
	I1018 12:19:21.081070  329523 cri.go:89] found id: "3d8531f8819a155bae8f5276bec64b4d55f23d29586c6dc59ecee2e01d0eac4c"
	I1018 12:19:21.081074  329523 cri.go:89] found id: "beda0d0ad2456588c42c64e748d9c9a3a59ec5a890826c601cd42d1a48c80717"
	I1018 12:19:21.081077  329523 cri.go:89] found id: "134c68115df400299f718a242dcc3487786865366d4c86ae9057813ce2261cb7"
	I1018 12:19:21.081080  329523 cri.go:89] found id: "47b0a89c606a2ed0c69b3d57a1254c989803ac5ff1e9913ca52c6c7b7c451aa9"
	I1018 12:19:21.081083  329523 cri.go:89] found id: "98cd3ecd97b52b4667430825deaaf5b42f0481bce7f80bdb63cc7d18be3f2c43"
	I1018 12:19:21.081085  329523 cri.go:89] found id: "b4e6ed35e6415d74f156e6f9b2caf8f4eee3580d9a2b0e69aa0489217f5ecff8"
	I1018 12:19:21.081088  329523 cri.go:89] found id: "7f679fa5b11a9e7c241aa782944e0a63d28817b54b5a1f2424c606492f4167fd"
	I1018 12:19:21.081100  329523 cri.go:89] found id: "6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	I1018 12:19:21.081103  329523 cri.go:89] found id: "4b69327aa0d0a64fdafbee660e64555b3ddd443d95b2e8615a545e1a1776ef12"
	I1018 12:19:21.081105  329523 cri.go:89] found id: ""
	I1018 12:19:21.081145  329523 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:21.093901  329523 retry.go:31] will retry after 275.520524ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:21Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:21.370393  329523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:21.383931  329523 pause.go:52] kubelet running: false
	I1018 12:19:21.383998  329523 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:21.532939  329523 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:21.533019  329523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:21.614872  329523 cri.go:89] found id: "7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d"
	I1018 12:19:21.614893  329523 cri.go:89] found id: "3a791b10f6b7292113c4ab4334268fa9103739de78ecf9577cda655bc7e04ad8"
	I1018 12:19:21.614897  329523 cri.go:89] found id: "3d8531f8819a155bae8f5276bec64b4d55f23d29586c6dc59ecee2e01d0eac4c"
	I1018 12:19:21.614902  329523 cri.go:89] found id: "beda0d0ad2456588c42c64e748d9c9a3a59ec5a890826c601cd42d1a48c80717"
	I1018 12:19:21.614906  329523 cri.go:89] found id: "134c68115df400299f718a242dcc3487786865366d4c86ae9057813ce2261cb7"
	I1018 12:19:21.614911  329523 cri.go:89] found id: "47b0a89c606a2ed0c69b3d57a1254c989803ac5ff1e9913ca52c6c7b7c451aa9"
	I1018 12:19:21.614915  329523 cri.go:89] found id: "98cd3ecd97b52b4667430825deaaf5b42f0481bce7f80bdb63cc7d18be3f2c43"
	I1018 12:19:21.614919  329523 cri.go:89] found id: "b4e6ed35e6415d74f156e6f9b2caf8f4eee3580d9a2b0e69aa0489217f5ecff8"
	I1018 12:19:21.614923  329523 cri.go:89] found id: "7f679fa5b11a9e7c241aa782944e0a63d28817b54b5a1f2424c606492f4167fd"
	I1018 12:19:21.614947  329523 cri.go:89] found id: "6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	I1018 12:19:21.614954  329523 cri.go:89] found id: "4b69327aa0d0a64fdafbee660e64555b3ddd443d95b2e8615a545e1a1776ef12"
	I1018 12:19:21.614956  329523 cri.go:89] found id: ""
	I1018 12:19:21.614991  329523 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:21.629449  329523 retry.go:31] will retry after 410.355342ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:21Z" level=error msg="open /run/runc: no such file or directory"
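
The retry.go:31 lines above come from minikube's generic retry helper: each failed `runc list` is re-run after a randomized, growing wait (about 276 ms, then 410 ms here). Below is a minimal Go sketch of that retry-with-jittered-backoff shape; the constants and jitter formula are illustrative, not minikube's actual tuning.

    // retry_sketch.go: illustrative retry with jittered exponential backoff.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Grow the wait each attempt and add random jitter, producing
            // uneven delays like the logged 275.52ms and 410.36ms.
            wait := base<<i + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(3, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("list running: transient failure")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
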
	I1018 12:19:22.040979  329523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:22.055222  329523 pause.go:52] kubelet running: false
	I1018 12:19:22.055308  329523 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:22.228704  329523 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:22.228812  329523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:22.302898  329523 cri.go:89] found id: "7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d"
	I1018 12:19:22.302928  329523 cri.go:89] found id: "3a791b10f6b7292113c4ab4334268fa9103739de78ecf9577cda655bc7e04ad8"
	I1018 12:19:22.302932  329523 cri.go:89] found id: "3d8531f8819a155bae8f5276bec64b4d55f23d29586c6dc59ecee2e01d0eac4c"
	I1018 12:19:22.302936  329523 cri.go:89] found id: "beda0d0ad2456588c42c64e748d9c9a3a59ec5a890826c601cd42d1a48c80717"
	I1018 12:19:22.302938  329523 cri.go:89] found id: "134c68115df400299f718a242dcc3487786865366d4c86ae9057813ce2261cb7"
	I1018 12:19:22.302944  329523 cri.go:89] found id: "47b0a89c606a2ed0c69b3d57a1254c989803ac5ff1e9913ca52c6c7b7c451aa9"
	I1018 12:19:22.302947  329523 cri.go:89] found id: "98cd3ecd97b52b4667430825deaaf5b42f0481bce7f80bdb63cc7d18be3f2c43"
	I1018 12:19:22.302950  329523 cri.go:89] found id: "b4e6ed35e6415d74f156e6f9b2caf8f4eee3580d9a2b0e69aa0489217f5ecff8"
	I1018 12:19:22.302952  329523 cri.go:89] found id: "7f679fa5b11a9e7c241aa782944e0a63d28817b54b5a1f2424c606492f4167fd"
	I1018 12:19:22.302977  329523 cri.go:89] found id: "6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	I1018 12:19:22.302981  329523 cri.go:89] found id: "4b69327aa0d0a64fdafbee660e64555b3ddd443d95b2e8615a545e1a1776ef12"
	I1018 12:19:22.302985  329523 cri.go:89] found id: ""
	I1018 12:19:22.303045  329523 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:22.319488  329523 out.go:203] 
	W1018 12:19:22.322322  329523 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:22.322351  329523 out.go:285] * 
	* 
	W1018 12:19:22.328333  329523 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:22.330793  329523 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-028309 --alsologtostderr -v=1 failed: exit status 80
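
The pause path above first disabled the kubelet (so nothing would restart the workloads) and then tried, on every attempt, to enumerate running containers with `sudo runc list -f json`; runc aborted each time because its default state root, /run/runc, does not exist inside this crio node, so exit status 80 (GUEST_PAUSE) was inevitable. A minimal Go sketch of that probe follows; it assumes runc is on PATH, and the alternative /run/crun root it tries is purely hypothetical, not a known crio default.

    // runcprobe_sketch.go: verify a runc state root before listing; a missing
    // root reproduces the "open /run/runc: no such file or directory" failure
    // captured above. Illustrative only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func listRunc(root string) ([]byte, error) {
        if _, err := os.Stat(root); err != nil {
            // Fail fast with a clearer message than runc's own error.
            return nil, fmt.Errorf("runc state root not usable: %w", err)
        }
        return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
    }

    func main() {
        for _, root := range []string{"/run/runc", "/run/crun"} {
            out, err := listRunc(root)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                continue
            }
            fmt.Printf("containers under %s:\n%s\n", root, out)
            return
        }
        os.Exit(1)
    }
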
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-028309
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-028309:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63",
	        "Created": "2025-10-18T12:17:15.571662487Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:18:19.407164276Z",
	            "FinishedAt": "2025-10-18T12:18:18.13601315Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/hostname",
	        "HostsPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/hosts",
	        "LogPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63-json.log",
	        "Name": "/default-k8s-diff-port-028309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-028309:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-028309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63",
	                "LowerDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-028309",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-028309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-028309",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-028309",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-028309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b29c45c1b504a92c3379b04b101fa55c150bbd5c02cebe4a911ac749596a940",
	            "SandboxKey": "/var/run/docker/netns/4b29c45c1b50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-028309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:9d:52:e1:5f:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9cb7bc9061ba59e01198e7ea5f6cf6ddd6ba962ca18f957a0fbcc8a6c5eef0e9",
	                    "EndpointID": "78ebf6fc33e2ba48861b9301ad856c0de86acd8c360167e19e3a99e7ec528de6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-028309",
	                        "189b5ecbc2d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
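
Note the division of labor in this inspect output: HostConfig.PortBindings requests HostPort "" (let Docker pick an ephemeral port), while the ports actually assigned appear only under NetworkSettings.Ports, which is the field the earlier `docker container inspect -f` template read to find SSH on 127.0.0.1:33118. A minimal Go sketch of the same lookup, decoding the JSON instead of using a Go template (the container name is taken from this test):

    // sshport_sketch.go: recover the published host port for 22/tcp from
    // `docker inspect` JSON, mirroring the template query in the log above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type containerInfo struct {
        NetworkSettings struct {
            // e.g. "22/tcp" -> [{HostIp: "127.0.0.1", HostPort: "33118"}]
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func sshHostPort(name string) (string, error) {
        out, err := exec.Command("docker", "inspect", name).Output()
        if err != nil {
            return "", err
        }
        var infos []containerInfo // docker inspect returns a JSON array
        if err := json.Unmarshal(out, &infos); err != nil {
            return "", err
        }
        if len(infos) == 0 {
            return "", fmt.Errorf("no such container: %s", name)
        }
        bindings := infos[0].NetworkSettings.Ports["22/tcp"]
        if len(bindings) == 0 {
            return "", fmt.Errorf("22/tcp is not published")
        }
        return bindings[0].HostPort, nil
    }

    func main() {
        fmt.Println(sshHostPort("default-k8s-diff-port-028309")) // "33118" above
    }
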
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309: exit status 2 (339.699162ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-028309 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-028309 logs -n 25: (1.179127787s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ default-k8s-diff-port-028309 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p default-k8s-diff-port-028309 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:54.845878  326490 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:54.846118  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846127  326490 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:54.846131  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846326  326490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:54.846865  326490 out.go:368] Setting JSON to false
	I1018 12:18:54.848113  326490 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3683,"bootTime":1760786252,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:54.848206  326490 start.go:141] virtualization: kvm guest
	I1018 12:18:54.851418  326490 out.go:179] * [newest-cni-579606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:54.856390  326490 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:54.856377  326490 notify.go:220] Checking for updates...
	I1018 12:18:54.857910  326490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:54.859215  326490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:54.860446  326490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:54.861847  326490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:54.863137  326490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:54.864900  326490 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.864984  326490 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.865092  326490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:54.888492  326490 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:54.888598  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:54.953711  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:54.941671438 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:54.953923  326490 docker.go:318] overlay module found
	I1018 12:18:54.958794  326490 out.go:179] * Using the docker driver based on user configuration
	I1018 12:18:54.960013  326490 start.go:305] selected driver: docker
	I1018 12:18:54.960033  326490 start.go:925] validating driver "docker" against <nil>
	I1018 12:18:54.960046  326490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:54.960615  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:55.022513  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:55.011731081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:55.022798  326490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 12:18:55.022840  326490 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 12:18:55.023141  326490 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:18:55.025322  326490 out.go:179] * Using Docker driver with root privileges
	I1018 12:18:55.026401  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:18:55.026484  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:55.026498  326490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:18:55.026560  326490 start.go:349] cluster config:
	{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:55.027938  326490 out.go:179] * Starting "newest-cni-579606" primary control-plane node in "newest-cni-579606" cluster
	I1018 12:18:55.029100  326490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:55.030360  326490 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:55.031422  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.031468  326490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:55.031489  326490 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:55.031522  326490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:55.031591  326490 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:55.031603  326490 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:55.031705  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:18:55.031726  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json: {Name:mk20e362fc30401f09fc034ac5a55088adce3cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
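
The lock.go line above shows each profile config write being serialized through a file lock with a 500 ms retry delay and a one-minute timeout before config.json is written. A minimal Go sketch of that acquire-then-write pattern follows; the O_EXCL sentinel file is an illustrative stand-in, not minikube's actual lock implementation.

    // lockwrite_sketch.go: acquire a lockfile, then write; poll on contention.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                defer os.Remove(lock) // release the lock on return
                return os.WriteFile(path, data, 0o644)
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %s", lock)
            }
            time.Sleep(delay) // poll interval, like the logged Delay:500ms
        }
    }

    func main() {
        err := writeFileLocked("/tmp/config.json", []byte(`{"ok":true}`),
            500*time.Millisecond, time.Minute)
        fmt.Println(err)
    }
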
	I1018 12:18:55.053307  326490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:55.053326  326490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:55.053342  326490 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:55.053373  326490 start.go:360] acquireMachinesLock for newest-cni-579606: {Name:mk4161cf0bf2eb93a8110dc388332ec9ca8fc5ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:55.053467  326490 start.go:364] duration metric: took 78.123µs to acquireMachinesLock for "newest-cni-579606"
	I1018 12:18:55.053489  326490 start.go:93] Provisioning new machine with config: &{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:55.053550  326490 start.go:125] createHost starting for "" (driver="docker")
	W1018 12:18:51.958241  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:53.959108  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:55.846032  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:58.346225  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:55.055345  326490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:18:55.055547  326490 start.go:159] libmachine.API.Create for "newest-cni-579606" (driver="docker")
	I1018 12:18:55.055575  326490 client.go:168] LocalClient.Create starting
	I1018 12:18:55.055636  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 12:18:55.055669  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055683  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.055736  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 12:18:55.055773  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055796  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.056153  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:18:55.073803  326490 cli_runner.go:211] docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:18:55.073868  326490 network_create.go:284] running [docker network inspect newest-cni-579606] to gather additional debugging logs...
	I1018 12:18:55.073887  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606
	W1018 12:18:55.092574  326490 cli_runner.go:211] docker network inspect newest-cni-579606 returned with exit code 1
	I1018 12:18:55.092605  326490 network_create.go:287] error running [docker network inspect newest-cni-579606]: docker network inspect newest-cni-579606: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-579606 not found
	I1018 12:18:55.092623  326490 network_create.go:289] output of [docker network inspect newest-cni-579606]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-579606 not found
	
	** /stderr **
	I1018 12:18:55.092788  326490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:55.111259  326490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
	I1018 12:18:55.111908  326490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6069a4ec9777 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:f7:2a:6b:48:b9} reservation:<nil>}
	I1018 12:18:55.112751  326490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-670e794a7c9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:d0:78:df:c7:fd} reservation:<nil>}
	I1018 12:18:55.113423  326490 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb34d522296 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:fc:1a:65:23:03} reservation:<nil>}
	I1018 12:18:55.114281  326490 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc7b00}
	I1018 12:18:55.114303  326490 network_create.go:124] attempt to create docker network newest-cni-579606 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 12:18:55.114345  326490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-579606 newest-cni-579606
	I1018 12:18:55.175643  326490 network_create.go:108] docker network newest-cni-579606 192.168.85.0/24 created
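
The subnet scan above walks 192.168.49.0/24, .58, .67, .76 and settles on 192.168.85.0/24: the third octet advances in steps of 9 until no local bridge interface owns the range. A minimal sketch of the same scan, with the step of 9 inferred from this log rather than from minikube's network package:

    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether any local interface already holds an address inside
    // cidr, which is how the br-* bridges above disqualify 49/58/67/76.
    func taken(cidr string) bool {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return true
        }
        addrs, _ := net.InterfaceAddrs()
        for _, a := range addrs {
            if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
                return true
            }
        }
        return false
    }

    func main() {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken(cidr) {
                fmt.Println("using free private subnet", cidr) // 192.168.85.0/24 in the run above
                return
            }
        }
    }
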
	I1018 12:18:55.175691  326490 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-579606" container
	I1018 12:18:55.175752  326490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:18:55.193582  326490 cli_runner.go:164] Run: docker volume create newest-cni-579606 --label name.minikube.sigs.k8s.io=newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:18:55.212499  326490 oci.go:103] Successfully created a docker volume newest-cni-579606
	I1018 12:18:55.212595  326490 cli_runner.go:164] Run: docker run --rm --name newest-cni-579606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --entrypoint /usr/bin/test -v newest-cni-579606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:18:55.635994  326490 oci.go:107] Successfully prepared a docker volume newest-cni-579606
	I1018 12:18:55.636038  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.636063  326490 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:18:55.636128  326490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 12:18:56.458229  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:58.958191  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:00.126774  326490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490575425s)
	I1018 12:19:00.126807  326490 kic.go:203] duration metric: took 4.4907405s to extract preloaded images to volume ...
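
That ~4.5s docker run is the preload step: the lz4 tarball of images is mounted read-only into a throwaway container whose entrypoint is /usr/bin/tar, which unpacks it into the named volume that later backs the node's /var. A minimal sketch mirroring the flags logged above; extractPreload is a hypothetical wrapper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks an lz4 image tarball into a docker volume using a
    // sidecar container, as in the `docker run --rm --entrypoint /usr/bin/tar` above.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
            "-v", volume+":/extractDir",        // named volume as the target
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(
            "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
            "newest-cni-579606",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6")
        fmt.Println(err)
    }
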
	W1018 12:19:00.126891  326490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 12:19:00.126924  326490 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 12:19:00.126991  326490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:19:00.190480  326490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-579606 --name newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-579606 --network newest-cni-579606 --ip 192.168.85.2 --volume newest-cni-579606:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:19:00.476973  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Running}}
	I1018 12:19:00.495553  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.516545  326490 cli_runner.go:164] Run: docker exec newest-cni-579606 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:19:00.562561  326490 oci.go:144] the created container "newest-cni-579606" has a running status.
	I1018 12:19:00.562609  326490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa...
	I1018 12:19:00.820117  326490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:19:00.854117  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.877422  326490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:19:00.877449  326490 kic_runner.go:114] Args: [docker exec --privileged newest-cni-579606 chown docker:docker /home/docker/.ssh/authorized_keys]
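
kic.go:225 and kic_runner.go:191 above create an RSA key on the host and copy its public half into the container as /home/docker/.ssh/authorized_keys (the 381 bytes match a 2048-bit RSA key in authorized_keys format). A minimal sketch of the keypair step, assuming golang.org/x/crypto/ssh; writeKeyPair and its path are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeKeyPair emits privPath (PEM, mode 0600) and privPath+".pub"
    // (authorized_keys format), like .minikube/machines/<name>/id_rsa and id_rsa.pub.
    func writeKeyPair(privPath string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(privPath, privPEM, 0600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(privPath+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
    }

    func main() {
        if err := writeKeyPair("id_rsa"); err != nil {
            panic(err)
        }
    }
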
	I1018 12:19:00.925342  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.944520  326490 machine.go:93] provisionDockerMachine start ...
	I1018 12:19:00.944616  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:00.964493  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:00.964838  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:00.964858  326490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:19:01.103775  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.103807  326490 ubuntu.go:182] provisioning hostname "newest-cni-579606"
	I1018 12:19:01.103880  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.124094  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.124376  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.124392  326490 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-579606 && echo "newest-cni-579606" | sudo tee /etc/hostname
	I1018 12:19:01.270628  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.270703  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.289410  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.289674  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.289696  326490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-579606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-579606/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-579606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:19:01.423556  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
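
provisionDockerMachine dials the container's published SSH port (127.0.0.1:33128 here) and runs each provisioning command, the hostname query, the tee into /etc/hostname, and the /etc/hosts patch, in its own session. A minimal sketch of one round trip with golang.org/x/crypto/ssh; runSSH is a hypothetical helper, and host-key checking is skipped only because the target is a local kic container:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH executes cmd on addr with key-based auth and returns combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:33128", "docker", "id_rsa", "hostname")
        fmt.Println(out, err)
    }
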
	I1018 12:19:01.423583  326490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:19:01.423603  326490 ubuntu.go:190] setting up certificates
	I1018 12:19:01.423619  326490 provision.go:84] configureAuth start
	I1018 12:19:01.423685  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:01.442627  326490 provision.go:143] copyHostCerts
	I1018 12:19:01.442683  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:19:01.442692  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:19:01.442779  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:19:01.442877  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:19:01.442887  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:19:01.442920  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:19:01.443028  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:19:01.443058  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:19:01.443088  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:19:01.443142  326490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.newest-cni-579606 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-579606]
	I1018 12:19:01.605969  326490 provision.go:177] copyRemoteCerts
	I1018 12:19:01.606038  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:19:01.606085  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.625297  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:01.723582  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:19:01.744640  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:19:01.763599  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:19:01.784423  326490 provision.go:87] duration metric: took 360.788993ms to configureAuth
	I1018 12:19:01.784458  326490 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:19:01.784652  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:01.784752  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.804299  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.804508  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.804524  326490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:19:02.051413  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:19:02.051436  326490 machine.go:96] duration metric: took 1.106891251s to provisionDockerMachine
	I1018 12:19:02.051444  326490 client.go:171] duration metric: took 6.995862509s to LocalClient.Create
	I1018 12:19:02.051460  326490 start.go:167] duration metric: took 6.995914544s to libmachine.API.Create "newest-cni-579606"
	I1018 12:19:02.051470  326490 start.go:293] postStartSetup for "newest-cni-579606" (driver="docker")
	I1018 12:19:02.051482  326490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:19:02.051542  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:19:02.051582  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.069826  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.169332  326490 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:19:02.173028  326490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:19:02.173060  326490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:19:02.173075  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:19:02.173131  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:19:02.173202  326490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:19:02.173312  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:19:02.181632  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:02.201730  326490 start.go:296] duration metric: took 150.246741ms for postStartSetup
	I1018 12:19:02.202117  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.220168  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:02.220438  326490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:19:02.220477  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.238665  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.333039  326490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:19:02.337804  326490 start.go:128] duration metric: took 7.284234042s to createHost
	I1018 12:19:02.337830  326490 start.go:83] releasing machines lock for "newest-cni-579606", held for 7.284352735s
	I1018 12:19:02.337891  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.357339  326490 ssh_runner.go:195] Run: cat /version.json
	I1018 12:19:02.357373  326490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:19:02.357386  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.357430  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.376606  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.377490  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.526194  326490 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:02.532929  326490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:19:02.568991  326490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:19:02.574362  326490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:19:02.574428  326490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:19:02.602949  326490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 12:19:02.602987  326490 start.go:495] detecting cgroup driver to use...
	I1018 12:19:02.603019  326490 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:19:02.603065  326490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:19:02.619432  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:19:02.632985  326490 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:19:02.633047  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:19:02.650953  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:19:02.670802  326490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:19:02.756116  326490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:19:02.848839  326490 docker.go:234] disabling docker service ...
	I1018 12:19:02.848900  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:19:02.868131  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:19:02.881575  326490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:19:02.965443  326490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:19:03.051508  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:19:03.064380  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:19:03.079484  326490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:19:03.079554  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.090169  326490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:19:03.090229  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.099749  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.109431  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.118802  326490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:19:03.127410  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.136357  326490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.151150  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.160956  326490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:19:03.169094  326490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:19:03.177522  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.257714  326490 ssh_runner.go:195] Run: sudo systemctl restart crio
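
The sed runs above pin pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart. A minimal sketch of the same whole-line rewrite, assuming the file keeps plain `key = value` TOML lines; setConfKey is hypothetical:

    package main

    import (
        "os"
        "regexp"
    )

    // setConfKey mirrors `sed -i 's|^.*key = .*$|key = "value"|'`: replace any
    // line mentioning the key with a canonical quoted assignment.
    func setConfKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        _ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        _ = setConfKey(conf, "cgroup_manager", "systemd")
        // followed by `systemctl daemon-reload && systemctl restart crio`, as above
    }
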
	I1018 12:19:03.374283  326490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:19:03.374356  326490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:19:03.378571  326490 start.go:563] Will wait 60s for crictl version
	I1018 12:19:03.378624  326490 ssh_runner.go:195] Run: which crictl
	I1018 12:19:03.382638  326490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:19:03.406896  326490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:19:03.406996  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.436202  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.466606  326490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:19:03.468046  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:19:03.485613  326490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:19:03.489792  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.502123  326490 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1018 12:19:00.846128  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:19:03.345904  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:03.503451  326490 kubeadm.go:883] updating cluster {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:19:03.503568  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:03.503623  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.537963  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.537988  326490 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:19:03.538037  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.564020  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.564061  326490 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:19:03.564071  326490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 12:19:03.564172  326490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-579606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:19:03.564251  326490 ssh_runner.go:195] Run: crio config
	I1018 12:19:03.609404  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:03.609430  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:03.609446  326490 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 12:19:03.609473  326490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-579606 NodeName:newest-cni-579606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:19:03.609666  326490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-579606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:19:03.609744  326490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:19:03.618201  326490 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:19:03.618283  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:19:03.626679  326490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:19:03.639983  326490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:19:03.655953  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 12:19:03.668846  326490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:19:03.672666  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.683073  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.766600  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:03.797248  326490 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606 for IP: 192.168.85.2
	I1018 12:19:03.797269  326490 certs.go:195] generating shared ca certs ...
	I1018 12:19:03.797296  326490 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:03.797445  326490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:19:03.797500  326490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:19:03.797513  326490 certs.go:257] generating profile certs ...
	I1018 12:19:03.797585  326490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key
	I1018 12:19:03.797609  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt with IP's: []
	I1018 12:19:04.196975  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt ...
	I1018 12:19:04.197011  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt: {Name:mka42a654d079c2a23058a0f14154e8b79ca5459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197222  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key ...
	I1018 12:19:04.197241  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key: {Name:mk220b04a2afae0bcb10852575c558c1404f1005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197355  326490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad
	I1018 12:19:04.197378  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 12:19:04.310285  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad ...
	I1018 12:19:04.310312  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad: {Name:mke978bbcfe8f1a2cbf3531371f43b4028ef678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310509  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad ...
	I1018 12:19:04.310528  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad: {Name:mk42b24c0f6b076eda0e07dce8424a94f5271da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310658  326490 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt
	I1018 12:19:04.310784  326490 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key
	I1018 12:19:04.310873  326490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key
	I1018 12:19:04.310898  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt with IP's: []
	I1018 12:19:04.385339  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt ...
	I1018 12:19:04.385370  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt: {Name:mk66f445c5bca9cdd3c55e6ee197ee7cb14dae9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.385567  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key ...
	I1018 12:19:04.385584  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key: {Name:mk29fee630df834569bfa6e21a7cc861705c1451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
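
The crypto.go/lock.go lines above mint the profile certificates: a client cert, an apiserver serving cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and a proxy-client cert, all signed by the shared minikubeCA. A minimal sketch of one CA-signed serving cert with those IP SANs using only crypto/x509; the self-signed CA built in main stands in for minikubeCA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signServingCert issues a fresh key plus a serving cert with the IP SANs
    // seen in the log, signed by the supplied CA.
    func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
            NotBefore: time.Now(), NotAfter: time.Now().Add(26280 * time.Hour),
            IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        der, _, err := signServingCert(caCert, caKey)
        fmt.Println(len(der), err)
    }
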
	I1018 12:19:04.385849  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:19:04.385893  326490 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:19:04.385908  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:19:04.385940  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:19:04.385972  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:19:04.386016  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:19:04.386076  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:04.386584  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:19:04.405651  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:19:04.423574  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:19:04.441442  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:19:04.460483  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:19:04.478325  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:19:04.496004  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:19:04.514077  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:19:04.532154  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:19:04.552898  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:19:04.572871  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:19:04.593879  326490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:19:04.608514  326490 ssh_runner.go:195] Run: openssl version
	I1018 12:19:04.615149  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:19:04.624305  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628375  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628425  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.663623  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:19:04.673411  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:19:04.682605  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.686974  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.687061  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.724063  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:19:04.733543  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:19:04.742538  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746549  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746601  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.781517  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:19:04.791034  326490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:19:04.794955  326490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:19:04.795012  326490 kubeadm.go:400] StartCluster: {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:04.795092  326490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:04.795154  326490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:04.823284  326490 cri.go:89] found id: ""
	I1018 12:19:04.823356  326490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:19:04.832075  326490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:19:04.840408  326490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:19:04.840478  326490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	W1018 12:19:00.958896  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:03.459593  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:05.845166  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:07.344832  317167 pod_ready.go:94] pod "coredns-66bc5c9577-7qgqj" is "Ready"
	I1018 12:19:07.344882  317167 pod_ready.go:86] duration metric: took 37.505154401s for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.347549  317167 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.351825  317167 pod_ready.go:94] pod "etcd-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.351851  317167 pod_ready.go:86] duration metric: took 4.270969ms for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.353893  317167 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.357781  317167 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.357802  317167 pod_ready.go:86] duration metric: took 3.889439ms for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.359743  317167 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.543689  317167 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.543718  317167 pod_ready.go:86] duration metric: took 183.92899ms for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.742726  317167 pod_ready.go:83] waiting for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.142748  317167 pod_ready.go:94] pod "kube-proxy-bffkr" is "Ready"
	I1018 12:19:08.142797  317167 pod_ready.go:86] duration metric: took 400.045074ms for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.343168  317167 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743587  317167 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:08.743618  317167 pod_ready.go:86] duration metric: took 400.420854ms for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743633  317167 pod_ready.go:40] duration metric: took 38.908363338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:08.790224  317167 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:08.792295  317167 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-028309" cluster and "default" namespace by default
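
The pod_ready loops interleaved through this log (processes 317167 and 319485) poll each kube-system pod until its PodReady condition turns True, or the pod disappears. A minimal client-go sketch of that wait, assuming a default kubeconfig; minikube's pod_ready.go also counts a deleted pod as success, which this sketch leaves out:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's PodReady condition is True or the timeout hits.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err // simplified: real code distinguishes "gone" from transient errors
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(2 * time.Second) // roughly the cadence visible in the timestamps above
        }
        return fmt.Errorf("pod %s/%s never became Ready", ns, name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "coredns-66bc5c9577-b6h9l", time.Minute))
    }
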
	I1018 12:19:04.849545  326490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:19:04.849562  326490 kubeadm.go:157] found existing configuration files:
	
	I1018 12:19:04.849600  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:19:04.857827  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:19:04.857889  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:19:04.865939  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:19:04.873915  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:19:04.873983  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:19:04.881861  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.890019  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:19:04.890088  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.898082  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:19:04.906181  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:19:04.906236  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:19:04.914044  326490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:19:04.975919  326490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:19:05.037824  326490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 12:19:05.957990  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:07.958857  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:09.958915  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:12.459097  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:14.458133  319485 pod_ready.go:94] pod "coredns-66bc5c9577-b6h9l" is "Ready"
	I1018 12:19:14.458159  319485 pod_ready.go:86] duration metric: took 31.505202758s for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.459959  319485 pod_ready.go:83] waiting for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.463248  319485 pod_ready.go:94] pod "etcd-embed-certs-175371" is "Ready"
	I1018 12:19:14.463270  319485 pod_ready.go:86] duration metric: took 3.284914ms for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.465089  319485 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.468551  319485 pod_ready.go:94] pod "kube-apiserver-embed-certs-175371" is "Ready"
	I1018 12:19:14.468570  319485 pod_ready.go:86] duration metric: took 3.458555ms for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.470303  319485 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.657339  319485 pod_ready.go:94] pod "kube-controller-manager-embed-certs-175371" is "Ready"
	I1018 12:19:14.657367  319485 pod_ready.go:86] duration metric: took 187.044696ms for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.856446  319485 pod_ready.go:83] waiting for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.257025  319485 pod_ready.go:94] pod "kube-proxy-t2x4c" is "Ready"
	I1018 12:19:15.257053  319485 pod_ready.go:86] duration metric: took 400.581639ms for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.456953  319485 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.893038  326490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:19:15.893090  326490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:19:15.893217  326490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:19:15.893353  326490 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 12:19:15.893498  326490 kubeadm.go:318] OS: Linux
	I1018 12:19:15.893566  326490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:19:15.893627  326490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:19:15.893696  326490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:19:15.893776  326490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:19:15.893850  326490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:19:15.893910  326490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:19:15.893969  326490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:19:15.894035  326490 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 12:19:15.894133  326490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:19:15.894281  326490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:19:15.894412  326490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:19:15.894516  326490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:19:15.896254  326490 out.go:252]   - Generating certificates and keys ...
	I1018 12:19:15.896337  326490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:19:15.896412  326490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:19:15.896489  326490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:19:15.896543  326490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:19:15.896599  326490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:19:15.896657  326490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:19:15.896708  326490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:19:15.896861  326490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.896916  326490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:19:15.897021  326490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.897080  326490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:19:15.897134  326490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:19:15.897176  326490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:19:15.897227  326490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:19:15.897280  326490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:19:15.897332  326490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:19:15.897378  326490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:19:15.897435  326490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:19:15.897486  326490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:19:15.897560  326490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:19:15.897622  326490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:19:15.899813  326490 out.go:252]   - Booting up control plane ...
	I1018 12:19:15.899904  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:19:15.899977  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:19:15.900053  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:19:15.900169  326490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:19:15.900307  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:19:15.900475  326490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:19:15.900586  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:19:15.900647  326490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:19:15.900835  326490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:19:15.900980  326490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:19:15.901059  326490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501237256s
	I1018 12:19:15.901160  326490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:19:15.901257  326490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 12:19:15.901388  326490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:19:15.901499  326490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:19:15.901562  326490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.520322183s
	I1018 12:19:15.901615  326490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.051874304s
	I1018 12:19:15.901668  326490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001667177s
	I1018 12:19:15.901817  326490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:19:15.902084  326490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:19:15.902160  326490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:19:15.902393  326490 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-579606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:19:15.902484  326490 kubeadm.go:318] [bootstrap-token] Using token: pmkr01.67na6m3iuf7b6wke
	I1018 12:19:15.904615  326490 out.go:252]   - Configuring RBAC rules ...
	I1018 12:19:15.904796  326490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:19:15.904875  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:19:15.905028  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:19:15.905156  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:19:15.905290  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:19:15.905391  326490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:19:15.905553  326490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:19:15.905613  326490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:19:15.905676  326490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:19:15.905684  326490 kubeadm.go:318] 
	I1018 12:19:15.905730  326490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:19:15.905736  326490 kubeadm.go:318] 
	I1018 12:19:15.905836  326490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:19:15.905852  326490 kubeadm.go:318] 
	I1018 12:19:15.905891  326490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:19:15.905967  326490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:19:15.906032  326490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:19:15.906040  326490 kubeadm.go:318] 
	I1018 12:19:15.906120  326490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:19:15.906130  326490 kubeadm.go:318] 
	I1018 12:19:15.906195  326490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:19:15.906216  326490 kubeadm.go:318] 
	I1018 12:19:15.906289  326490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:19:15.906393  326490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:19:15.906490  326490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:19:15.906500  326490 kubeadm.go:318] 
	I1018 12:19:15.906596  326490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:19:15.906826  326490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:19:15.906844  326490 kubeadm.go:318] 
	I1018 12:19:15.906936  326490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907119  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 12:19:15.907164  326490 kubeadm.go:318] 	--control-plane 
	I1018 12:19:15.907173  326490 kubeadm.go:318] 
	I1018 12:19:15.907323  326490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:19:15.907337  326490 kubeadm.go:318] 
	I1018 12:19:15.907436  326490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907606  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
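Note: bootstrap tokens such as pmkr01.… expire after 24 hours by default, so the join command above has a limited shelf life; a fresh one can be printed on the control plane at any time with:

  kubeadm token create --print-join-command
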
	I1018 12:19:15.907623  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:15.907632  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:15.857063  319485 pod_ready.go:94] pod "kube-scheduler-embed-certs-175371" is "Ready"
	I1018 12:19:15.857091  319485 pod_ready.go:86] duration metric: took 400.110605ms for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.857103  319485 pod_ready.go:40] duration metric: took 32.907623738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:15.908233  319485 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:15.909420  326490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:19:15.910368  319485 out.go:179] * Done! kubectl is now configured to use "embed-certs-175371" cluster and "default" namespace by default
	I1018 12:19:15.911428  326490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:19:15.916203  326490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:19:15.916223  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:19:15.930716  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:19:16.186811  326490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:19:16.186877  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.186927  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-579606 minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=newest-cni-579606 minikube.k8s.io/primary=true
	I1018 12:19:16.200483  326490 ops.go:34] apiserver oom_adj: -16
	I1018 12:19:16.289962  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.790297  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.290815  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.790675  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.290971  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.791051  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.291007  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.790041  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.290948  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.364194  326490 kubeadm.go:1113] duration metric: took 4.177366872s to wait for elevateKubeSystemPrivileges
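Note: the repeated "kubectl get sa default" runs above are a ~500ms poll for the default ServiceAccount to exist, gating the minikube-rbac clusterrolebinding created at 12:19:16.186. An equivalent standalone loop, using the same binary and kubeconfig paths as the log:

  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done
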
	I1018 12:19:20.364236  326490 kubeadm.go:402] duration metric: took 15.569226889s to StartCluster
	I1018 12:19:20.364257  326490 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.364341  326490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:20.366539  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.366808  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:19:20.366823  326490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:19:20.366886  326490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:19:20.366978  326490 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-579606"
	I1018 12:19:20.366998  326490 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-579606"
	I1018 12:19:20.367029  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.367046  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.367047  326490 addons.go:69] Setting default-storageclass=true in profile "newest-cni-579606"
	I1018 12:19:20.367088  326490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-579606"
	I1018 12:19:20.367465  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.367552  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.368575  326490 out.go:179] * Verifying Kubernetes components...
	I1018 12:19:20.370326  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:20.394477  326490 addons.go:238] Setting addon default-storageclass=true in "newest-cni-579606"
	I1018 12:19:20.394522  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.394869  326490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:19:20.395017  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.396676  326490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.396702  326490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:19:20.396772  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423305  326490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.423405  326490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:19:20.423499  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423817  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.453744  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.465106  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:19:20.532388  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:20.546306  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.568683  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.669063  326490 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
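Note: the sed pipeline at 12:19:20.465 rewrites the CoreDNS Corefile in place; the stanza it injects (with the values from this run) is:

          hosts {
             192.168.85.1 host.minikube.internal
             fallthrough
          }
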
	I1018 12:19:20.670556  326490 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:19:20.670609  326490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:19:20.899558  326490 api_server.go:72] duration metric: took 532.701277ms to wait for apiserver process to appear ...
	I1018 12:19:20.899596  326490 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:19:20.899623  326490 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:20.906703  326490 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:19:20.907612  326490 api_server.go:141] control plane version: v1.34.1
	I1018 12:19:20.907641  326490 api_server.go:131] duration metric: took 8.037799ms to wait for apiserver health ...
	I1018 12:19:20.907652  326490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:19:20.909941  326490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:19:20.911175  326490 addons.go:514] duration metric: took 544.288646ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:19:20.911194  326490 system_pods.go:59] 8 kube-system pods found
	I1018 12:19:20.911217  326490 system_pods.go:61] "coredns-66bc5c9577-p6bts" [49609244-6dc2-4950-8fad-8240b827ecca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911224  326490 system_pods.go:61] "etcd-newest-cni-579606" [496c00b4-7ad1-40c0-a440-c396a752cbf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:19:20.911231  326490 system_pods.go:61] "kindnet-2c4t6" [08c0018d-0f0f-435e-8868-31818d5639fa] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:19:20.911238  326490 system_pods.go:61] "kube-apiserver-newest-cni-579606" [a39961c7-019e-41ec-8843-e98e9c2e3604] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:19:20.911249  326490 system_pods.go:61] "kube-controller-manager-newest-cni-579606" [992bd82d-6489-43da-83ba-8dcb6b86fe48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:19:20.911262  326490 system_pods.go:61] "kube-proxy-5hjgn" [915df613-23ce-49e2-b125-d223024077b0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:19:20.911291  326490 system_pods.go:61] "kube-scheduler-newest-cni-579606" [2a1de39e-4fa6-49e8-a420-75a6c82ac73e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:19:20.911306  326490 system_pods.go:61] "storage-provisioner" [c7ff4c04-56e5-469b-9af2-dc1bf4fe969d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911314  326490 system_pods.go:74] duration metric: took 3.655766ms to wait for pod list to return data ...
	I1018 12:19:20.911324  326490 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:19:20.913681  326490 default_sa.go:45] found service account: "default"
	I1018 12:19:20.913702  326490 default_sa.go:55] duration metric: took 2.371901ms for default service account to be created ...
	I1018 12:19:20.913712  326490 kubeadm.go:586] duration metric: took 546.861004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:20.913730  326490 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:19:20.916084  326490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:19:20.916105  326490 node_conditions.go:123] node cpu capacity is 8
	I1018 12:19:20.916117  326490 node_conditions.go:105] duration metric: took 2.382506ms to run NodePressure ...
	I1018 12:19:20.916128  326490 start.go:241] waiting for startup goroutines ...
	I1018 12:19:21.173827  326490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-579606" context rescaled to 1 replicas
	I1018 12:19:21.173870  326490 start.go:246] waiting for cluster config update ...
	I1018 12:19:21.173882  326490 start.go:255] writing updated cluster config ...
	I1018 12:19:21.174193  326490 ssh_runner.go:195] Run: rm -f paused
	I1018 12:19:21.223166  326490 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:21.225317  326490 out.go:179] * Done! kubectl is now configured to use "newest-cni-579606" cluster and "default" namespace by default
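Note: at this point both profiles report Done, so kubectl has a context for each. A quick sanity check from the host:

  kubectl config get-contexts
  kubectl --context newest-cni-579606 get pods -A
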
	
	
	==> CRI-O <==
	Oct 18 12:18:39 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:39.57108058Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:18:39 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:39.779057419Z" level=info msg="Removing container: 0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963" id=9a7b6e65-c021-4dd2-a7c8-24357f84f8c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:39 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:39.793295763Z" level=info msg="Removed container 0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=9a7b6e65-c021-4dd2-a7c8-24357f84f8c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.709550204Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7ab41b1e-9ddb-4954-82ab-3778cac993d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.713025013Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=895c0ccb-22bd-413a-a66e-e5dc0445b3b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.716577468Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=12898ce6-b9f8-4bb4-8daf-6810e70845ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.719104037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.728278528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.728960268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.766546895Z" level=info msg="Created container 6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=12898ce6-b9f8-4bb4-8daf-6810e70845ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.767261324Z" level=info msg="Starting container: 6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2" id=b29e4fc0-7cc4-4bf3-aac4-c8e6935302ed name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.769680437Z" level=info msg="Started container" PID=1775 containerID=6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper id=b29e4fc0-7cc4-4bf3-aac4-c8e6935302ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=d813324b7a87994aebddb320d998d445925afdb7cec91d6a467aa9ee8202f79c
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.826246681Z" level=info msg="Removing container: 6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c" id=078d2263-0627-484b-9b6b-eebdc95fb449 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.83738487Z" level=info msg="Removed container 6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=078d2263-0627-484b-9b6b-eebdc95fb449 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.842449416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a2437e6a-0e71-4a94-86c9-d3e8f5d2812f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.938274156Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2a7fe16-e512-4c93-a952-5d2945272074 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.961071175Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7aeaac60-e4fd-4a3b-8878-0cdb348d2cc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.961402227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.098170721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.098322806Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5a093b1a960020e8b1243dad9604b3824b6eaf08228cfc1d62dbf4062cd5f465/merged/etc/passwd: no such file or directory"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.098346358Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5a093b1a960020e8b1243dad9604b3824b6eaf08228cfc1d62dbf4062cd5f465/merged/etc/group: no such file or directory"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.099480692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.126906625Z" level=info msg="Created container 7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d: kube-system/storage-provisioner/storage-provisioner" id=7aeaac60-e4fd-4a3b-8878-0cdb348d2cc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.127801407Z" level=info msg="Starting container: 7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d" id=b7a0cacf-7e6f-44df-8834-112a2c33f171 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.129879368Z" level=info msg="Started container" PID=1789 containerID=7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d description=kube-system/storage-provisioner/storage-provisioner id=b7a0cacf-7e6f-44df-8834-112a2c33f171 name=/runtime.v1.RuntimeService/StartContainer sandboxID=65e4b9b67d10b51a01e0df6de82304a1bf98eec7ec885b2e85ebe735e7a60358
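Note: the CRI-O section above is the crio systemd unit's journal from inside the default-k8s-diff-port-028309 node. Assuming the profile is still running, it can be tailed live with:

  minikube -p default-k8s-diff-port-028309 ssh -- sudo journalctl -u crio -f
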
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7badc800fa403       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   65e4b9b67d10b       storage-provisioner                                    kube-system
	6ef023ef21b14       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   d813324b7a879       dashboard-metrics-scraper-6ffb444bf9-tq7v6             kubernetes-dashboard
	4b69327aa0d0a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   0d906b90aa6bd       kubernetes-dashboard-855c9754f9-lmkc8                  kubernetes-dashboard
	3a791b10f6b72       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   83c1e5ead4a6e       coredns-66bc5c9577-7qgqj                               kube-system
	030516fe569e1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   c17889afe31a4       busybox                                                default
	3d8531f8819a1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   a291fe8320284       kube-proxy-bffkr                                       kube-system
	beda0d0ad2456       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   1050ac19a66bb       kindnet-hbfgg                                          kube-system
	134c68115df40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   65e4b9b67d10b       storage-provisioner                                    kube-system
	47b0a89c606a2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   49e6226018b07       kube-apiserver-default-k8s-diff-port-028309            kube-system
	98cd3ecd97b52       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   4c1e3a255496d       kube-scheduler-default-k8s-diff-port-028309            kube-system
	b4e6ed35e6415       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   2a56df8397d44       kube-controller-manager-default-k8s-diff-port-028309   kube-system
	7f679fa5b11a9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   c7991f4db00c1       etcd-default-k8s-diff-port-028309                      kube-system
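Note: the container status table is the CRI view of the runtime; the same listing (including the Exited dashboard-metrics-scraper and the restarted storage-provisioner) can be reproduced inside the node with:

  minikube -p default-k8s-diff-port-028309 ssh -- sudo crictl ps -a
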
	
	
	==> coredns [3a791b10f6b7292113c4ab4334268fa9103739de78ecf9577cda655bc7e04ad8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57614 - 60558 "HINFO IN 388194415275680658.1841293297904610492. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.049800991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
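Note: 10.96.0.1:443 is the in-cluster "kubernetes" Service VIP; the i/o timeouts above typically indicate CoreDNS started before kube-proxy and kindnet had programmed service routing, and they stop once the dataplane is up. A quick after-the-fact check:

  kubectl get svc kubernetes -n default
  kubectl -n kube-system get pods -l k8s-app=kube-dns
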
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-028309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-028309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-028309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_17_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:17:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-028309
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-028309
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                ff570318-6181-45ed-80f8-45dccb2d1794
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-7qgqj                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-028309                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-hbfgg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-028309             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-028309    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-bffkr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-028309             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tq7v6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lmkc8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node default-k8s-diff-port-028309 event: Registered Node default-k8s-diff-port-028309 in Controller
	  Normal  NodeReady                96s                  kubelet          Node default-k8s-diff-port-028309 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node default-k8s-diff-port-028309 event: Registered Node default-k8s-diff-port-028309 in Controller
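Note: the block above is standard "kubectl describe node" output; the empty Kube-Proxy Version field is expected on recent Kubernetes, which no longer populates node.status.nodeInfo.kubeProxyVersion. To regenerate it:

  kubectl describe node default-k8s-diff-port-028309
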
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
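Note: the "martian source" lines are the kernel flagging packets whose source address is unexpected for the receiving interface; with pods on 10.244.0.0/16 behind a Docker bridge this is common during test churn and harmless here. Whether such packets are logged at all is controlled by a sysctl:

  sysctl net.ipv4.conf.all.log_martians
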
	
	
	==> etcd [7f679fa5b11a9e7c241aa782944e0a63d28817b54b5a1f2424c606492f4167fd] <==
	{"level":"warn","ts":"2025-10-18T12:18:27.625164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.631838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.638345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.644919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.651337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.659141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.666430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.675965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.684290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.693888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.702174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.710966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.718477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.727259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.734945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.741567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.755862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.762641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.778188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.785221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.791838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.842986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:18:58.801538Z","caller":"traceutil/trace.go:172","msg":"trace[1821153281] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"123.719441ms","start":"2025-10-18T12:18:58.677795Z","end":"2025-10-18T12:18:58.801514Z","steps":["trace[1821153281] 'process raft request'  (duration: 123.587121ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:18:59.693798Z","caller":"traceutil/trace.go:172","msg":"trace[201754330] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"142.413308ms","start":"2025-10-18T12:18:59.551337Z","end":"2025-10-18T12:18:59.693751Z","steps":["trace[201754330] 'process raft request'  (duration: 128.118927ms)","trace[201754330] 'compare'  (duration: 14.174445ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:19:00.098886Z","caller":"traceutil/trace.go:172","msg":"trace[480506682] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"249.597908ms","start":"2025-10-18T12:18:59.849269Z","end":"2025-10-18T12:19:00.098867Z","steps":["trace[480506682] 'process raft request'  (duration: 249.456601ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:19:23 up  1:01,  0 user,  load average: 3.21, 3.86, 2.61
	Linux default-k8s-diff-port-028309 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [beda0d0ad2456588c42c64e748d9c9a3a59ec5a890826c601cd42d1a48c80717] <==
	I1018 12:18:29.334277       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:18:29.334615       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 12:18:29.334848       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:18:29.334869       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:18:29.334890       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:18:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:18:29.537834       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:18:29.634176       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:18:29.634323       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:18:29.634627       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:18:30.034513       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:18:30.034549       1 metrics.go:72] Registering metrics
	I1018 12:18:30.034624       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:39.537948       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:18:39.538049       1 main.go:301] handling current node
	I1018 12:18:49.544854       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:18:49.544904       1 main.go:301] handling current node
	I1018 12:18:59.537882       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:18:59.537943       1 main.go:301] handling current node
	I1018 12:19:09.539198       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:19:09.539282       1 main.go:301] handling current node
	I1018 12:19:19.537491       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:19:19.537534       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47b0a89c606a2ed0c69b3d57a1254c989803ac5ff1e9913ca52c6c7b7c451aa9] <==
	I1018 12:18:28.316899       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:18:28.316604       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 12:18:28.317526       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 12:18:28.317570       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:18:28.317579       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:18:28.317584       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:18:28.317590       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:18:28.318702       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 12:18:28.318724       1 policy_source.go:240] refreshing policies
	I1018 12:18:28.321425       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 12:18:28.325916       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 12:18:28.358297       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:18:28.369161       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:18:28.569663       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:18:28.600568       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:18:28.625500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:18:28.643492       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:18:28.653205       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:18:28.694506       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.27.82"}
	I1018 12:18:28.706927       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.163.242"}
	I1018 12:18:29.219799       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:18:31.698146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:18:32.049505       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:18:32.200433       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b4e6ed35e6415d74f156e6f9b2caf8f4eee3580d9a2b0e69aa0489217f5ecff8] <==
	I1018 12:18:31.613033       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:18:31.616248       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:18:31.619548       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:18:31.624845       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:18:31.628134       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:18:31.645533       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:18:31.645548       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:18:31.645651       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:18:31.645676       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:31.645695       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:18:31.645710       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:18:31.645856       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:18:31.646218       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:18:31.646299       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:18:31.646303       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:18:31.646592       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:18:31.648052       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:18:31.648906       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:18:31.649141       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:18:31.649317       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:18:31.649340       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:18:31.650975       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:18:31.654513       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:18:31.664841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:31.669277       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [3d8531f8819a155bae8f5276bec64b4d55f23d29586c6dc59ecee2e01d0eac4c] <==
	I1018 12:18:29.105755       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:18:29.162636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:18:29.263297       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:18:29.263353       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 12:18:29.263511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:18:29.286860       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:18:29.286924       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:18:29.293424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:18:29.294062       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:18:29.294103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:29.295590       1 config.go:200] "Starting service config controller"
	I1018 12:18:29.295612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:18:29.295817       1 config.go:309] "Starting node config controller"
	I1018 12:18:29.295874       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:18:29.295886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:18:29.296089       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:18:29.296096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:18:29.296133       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:18:29.296151       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:18:29.395832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:18:29.397041       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:18:29.397081       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [98cd3ecd97b52b4667430825deaaf5b42f0481bce7f80bdb63cc7d18be3f2c43] <==
	I1018 12:18:26.620180       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:18:28.227517       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:18:28.227555       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:18:28.227567       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:18:28.227576       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:18:28.286125       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:18:28.286159       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:28.289098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:28.289136       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:28.290191       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:18:28.290272       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:18:28.389534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:18:37 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:37.042185     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 12:18:37 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:37.721455     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lmkc8" podStartSLOduration=2.921802052 podStartE2EDuration="5.721425861s" podCreationTimestamp="2025-10-18 12:18:32 +0000 UTC" firstStartedPulling="2025-10-18 12:18:32.503369237 +0000 UTC m=+6.883742991" lastFinishedPulling="2025-10-18 12:18:35.302993046 +0000 UTC m=+9.683366800" observedRunningTime="2025-10-18 12:18:35.77453946 +0000 UTC m=+10.154913245" watchObservedRunningTime="2025-10-18 12:18:37.721425861 +0000 UTC m=+12.101799633"
	Oct 18 12:18:38 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:38.608943     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podStartSLOduration=1.423096885 podStartE2EDuration="6.608919707s" podCreationTimestamp="2025-10-18 12:18:32 +0000 UTC" firstStartedPulling="2025-10-18 12:18:32.503549139 +0000 UTC m=+6.883922903" lastFinishedPulling="2025-10-18 12:18:37.689371973 +0000 UTC m=+12.069745725" observedRunningTime="2025-10-18 12:18:37.776448308 +0000 UTC m=+12.156822080" watchObservedRunningTime="2025-10-18 12:18:38.608919707 +0000 UTC m=+12.989293479"
	Oct 18 12:18:38 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:38.768400     721 scope.go:117] "RemoveContainer" containerID="0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963"
	Oct 18 12:18:39 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:39.774803     721 scope.go:117] "RemoveContainer" containerID="0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963"
	Oct 18 12:18:39 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:39.775181     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:39 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:39.775354     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:40 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:40.779330     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:40 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:40.779564     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:41 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:41.782254     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:41 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:41.782479     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:54.708869     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:54.823741     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:54.824034     721 scope.go:117] "RemoveContainer" containerID="6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:54.824249     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:59 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:59.841934     721 scope.go:117] "RemoveContainer" containerID="134c68115df400299f718a242dcc3487786865366d4c86ae9057813ce2261cb7"
	Oct 18 12:19:01 default-k8s-diff-port-028309 kubelet[721]: I1018 12:19:01.768803     721 scope.go:117] "RemoveContainer" containerID="6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	Oct 18 12:19:01 default-k8s-diff-port-028309 kubelet[721]: E1018 12:19:01.769005     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:19:13 default-k8s-diff-port-028309 kubelet[721]: I1018 12:19:13.709218     721 scope.go:117] "RemoveContainer" containerID="6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	Oct 18 12:19:13 default-k8s-diff-port-028309 kubelet[721]: E1018 12:19:13.709478     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:19:20 default-k8s-diff-port-028309 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:19:20 default-k8s-diff-port-028309 kubelet[721]: I1018 12:19:20.986650     721 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 12:19:21 default-k8s-diff-port-028309 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:19:21 default-k8s-diff-port-028309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:19:21 default-k8s-diff-port-028309 systemd[1]: kubelet.service: Consumed 1.879s CPU time.
	
	
	==> kubernetes-dashboard [4b69327aa0d0a64fdafbee660e64555b3ddd443d95b2e8615a545e1a1776ef12] <==
	2025/10/18 12:18:35 Starting overwatch
	2025/10/18 12:18:35 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:35 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:35 Using secret token for csrf signing
	2025/10/18 12:18:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:35 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:18:35 Generating JWE encryption key
	2025/10/18 12:18:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:35 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:35 Creating in-cluster Sidecar client
	2025/10/18 12:18:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:35 Serving insecurely on HTTP port: 9090
	2025/10/18 12:19:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [134c68115df400299f718a242dcc3487786865366d4c86ae9057813ce2261cb7] <==
	I1018 12:18:29.070585       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:18:59.075248       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d] <==
	I1018 12:19:00.144568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:19:00.154271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:19:00.154325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:19:00.157908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:03.613477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:07.874120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:11.472272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:14.526939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.549440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.554017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:17.554204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:19:17.554289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5d62124-6ee2-44d3-a6fa-ae6c6c57818d", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-028309_0d9d13a4-48ec-4a17-97e6-cc2f1b28adb6 became leader
	I1018 12:19:17.554358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-028309_0d9d13a4-48ec-4a17-97e6-cc2f1b28adb6!
	W1018 12:19:17.557072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.560031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:17.654778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-028309_0d9d13a4-48ec-4a17-97e6-cc2f1b28adb6!
	W1018 12:19:19.563816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:19.568797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:21.573489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:21.578679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:23.582797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:23.587976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309: exit status 2 (316.55029ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-028309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-028309
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-028309:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63",
	        "Created": "2025-10-18T12:17:15.571662487Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:18:19.407164276Z",
	            "FinishedAt": "2025-10-18T12:18:18.13601315Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/hostname",
	        "HostsPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/hosts",
	        "LogPath": "/var/lib/docker/containers/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63/189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63-json.log",
	        "Name": "/default-k8s-diff-port-028309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-028309:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-028309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "189b5ecbc2d40e112a4b40288e8ec8a54b8916e651646ccaf38bfa0f65c90a63",
	                "LowerDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c3ff02d9edfcdd2a7ea282d3d34f3f417c0e8e17e7349aa6c54d520ceea71c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-028309",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-028309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-028309",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-028309",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-028309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b29c45c1b504a92c3379b04b101fa55c150bbd5c02cebe4a911ac749596a940",
	            "SandboxKey": "/var/run/docker/netns/4b29c45c1b50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-028309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:9d:52:e1:5f:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9cb7bc9061ba59e01198e7ea5f6cf6ddd6ba962ca18f957a0fbcc8a6c5eef0e9",
	                    "EndpointID": "78ebf6fc33e2ba48861b9301ad856c0de86acd8c360167e19e3a99e7ec528de6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-028309",
	                        "189b5ecbc2d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309: exit status 2 (306.766909ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-028309 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-028309 logs -n 25: (1.157390576s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ default-k8s-diff-port-028309 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p default-k8s-diff-port-028309 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ stop    │ -p newest-cni-579606 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:54.845878  326490 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:54.846118  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846127  326490 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:54.846131  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846326  326490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:54.846865  326490 out.go:368] Setting JSON to false
	I1018 12:18:54.848113  326490 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3683,"bootTime":1760786252,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:54.848206  326490 start.go:141] virtualization: kvm guest
	I1018 12:18:54.851418  326490 out.go:179] * [newest-cni-579606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:54.856390  326490 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:54.856377  326490 notify.go:220] Checking for updates...
	I1018 12:18:54.857910  326490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:54.859215  326490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:54.860446  326490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:54.861847  326490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:54.863137  326490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:54.864900  326490 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.864984  326490 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.865092  326490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:54.888492  326490 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:54.888598  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:54.953711  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:54.941671438 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:54.953923  326490 docker.go:318] overlay module found
	I1018 12:18:54.958794  326490 out.go:179] * Using the docker driver based on user configuration
	I1018 12:18:54.960013  326490 start.go:305] selected driver: docker
	I1018 12:18:54.960033  326490 start.go:925] validating driver "docker" against <nil>
	I1018 12:18:54.960046  326490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:54.960615  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:55.022513  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:55.011731081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:55.022798  326490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 12:18:55.022840  326490 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 12:18:55.023141  326490 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:18:55.025322  326490 out.go:179] * Using Docker driver with root privileges
	I1018 12:18:55.026401  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:18:55.026484  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:55.026498  326490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:18:55.026560  326490 start.go:349] cluster config:
	{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:55.027938  326490 out.go:179] * Starting "newest-cni-579606" primary control-plane node in "newest-cni-579606" cluster
	I1018 12:18:55.029100  326490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:55.030360  326490 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:55.031422  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.031468  326490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:55.031489  326490 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:55.031522  326490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:55.031591  326490 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:55.031603  326490 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:55.031705  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:18:55.031726  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json: {Name:mk20e362fc30401f09fc034ac5a55088adce3cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:55.053307  326490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:55.053326  326490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:55.053342  326490 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:55.053373  326490 start.go:360] acquireMachinesLock for newest-cni-579606: {Name:mk4161cf0bf2eb93a8110dc388332ec9ca8fc5ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:55.053467  326490 start.go:364] duration metric: took 78.123µs to acquireMachinesLock for "newest-cni-579606"
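
The lock spec in the line above ({... Delay:500ms Timeout:10m0s ...}) is minikube's standard acquire policy: retry every 500ms, give up after ten minutes. A minimal sketch of that loop in Go, with tryLock as a hypothetical stand-in for the real file-based lock:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errHeld = errors.New("lock held")

    // tryLock is a hypothetical stand-in for minikube's file-based lock.
    func tryLock(name string) error { return nil }

    // acquire retries every delay until timeout, matching the policy the
    // log shows as {Delay:500ms Timeout:10m0s}.
    func acquire(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := tryLock(name); err == nil {
                return nil
            } else if !errors.Is(err, errHeld) {
                return err
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %q", name)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        fmt.Println(acquire("newest-cni-579606", 500*time.Millisecond, 10*time.Minute))
    }
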
	I1018 12:18:55.053489  326490 start.go:93] Provisioning new machine with config: &{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:55.053550  326490 start.go:125] createHost starting for "" (driver="docker")
	W1018 12:18:51.958241  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:53.959108  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:55.846032  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:58.346225  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:55.055345  326490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:18:55.055547  326490 start.go:159] libmachine.API.Create for "newest-cni-579606" (driver="docker")
	I1018 12:18:55.055575  326490 client.go:168] LocalClient.Create starting
	I1018 12:18:55.055636  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 12:18:55.055669  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055683  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.055736  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 12:18:55.055773  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055796  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.056153  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:18:55.073803  326490 cli_runner.go:211] docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:18:55.073868  326490 network_create.go:284] running [docker network inspect newest-cni-579606] to gather additional debugging logs...
	I1018 12:18:55.073887  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606
	W1018 12:18:55.092574  326490 cli_runner.go:211] docker network inspect newest-cni-579606 returned with exit code 1
	I1018 12:18:55.092605  326490 network_create.go:287] error running [docker network inspect newest-cni-579606]: docker network inspect newest-cni-579606: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-579606 not found
	I1018 12:18:55.092623  326490 network_create.go:289] output of [docker network inspect newest-cni-579606]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-579606 not found
	
	** /stderr **
	I1018 12:18:55.092788  326490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:55.111259  326490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
	I1018 12:18:55.111908  326490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6069a4ec9777 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:f7:2a:6b:48:b9} reservation:<nil>}
	I1018 12:18:55.112751  326490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-670e794a7c9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:d0:78:df:c7:fd} reservation:<nil>}
	I1018 12:18:55.113423  326490 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb34d522296 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:fc:1a:65:23:03} reservation:<nil>}
	I1018 12:18:55.114281  326490 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc7b00}
	I1018 12:18:55.114303  326490 network_create.go:124] attempt to create docker network newest-cni-579606 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 12:18:55.114345  326490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-579606 newest-cni-579606
	I1018 12:18:55.175643  326490 network_create.go:108] docker network newest-cni-579606 192.168.85.0/24 created
	I1018 12:18:55.175691  326490 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-579606" container
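
The subnet scan above is deterministic: minikube walks candidate private /24s in steps of 9 (49, 58, 67, 76, ...), skips any CIDR already bound to a host bridge interface, and reserves the first free one. A minimal sketch of that walk under the same assumption, with the taken set hard-coded to the bridges this log reports instead of probing real interfaces:

    package main

    import "fmt"

    // taken mirrors the bridge networks the log reports as already in use.
    var taken = map[string]bool{
        "192.168.49.0/24": true,
        "192.168.58.0/24": true,
        "192.168.67.0/24": true,
        "192.168.76.0/24": true,
    }

    // freeSubnet walks candidate /24s from 192.168.49.0 in steps of 9,
    // the progression visible in the log, and returns the first free one.
    func freeSubnet() string {
        for third := 49; third <= 254; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] { // minikube checks host interfaces here
                return cidr
            }
        }
        return ""
    }

    func main() {
        fmt.Println(freeSubnet()) // prints 192.168.85.0/24, matching the log
    }
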
	I1018 12:18:55.175752  326490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:18:55.193582  326490 cli_runner.go:164] Run: docker volume create newest-cni-579606 --label name.minikube.sigs.k8s.io=newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:18:55.212499  326490 oci.go:103] Successfully created a docker volume newest-cni-579606
	I1018 12:18:55.212595  326490 cli_runner.go:164] Run: docker run --rm --name newest-cni-579606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --entrypoint /usr/bin/test -v newest-cni-579606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:18:55.635994  326490 oci.go:107] Successfully prepared a docker volume newest-cni-579606
	I1018 12:18:55.636038  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.636063  326490 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:18:55.636128  326490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 12:18:56.458229  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:58.958191  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:00.126774  326490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490575425s)
	I1018 12:19:00.126807  326490 kic.go:203] duration metric: took 4.4907405s to extract preloaded images to volume ...
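
The preload step above never unpacks on the host directly: the lz4 tarball is bind-mounted read-only into a throwaway kicbase container and untarred into the named volume. A sketch of the same invocation built with os/exec; the paths here are illustrative stand-ins for the ones in the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Illustrative stand-ins for the paths and names in the log.
        tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
        volume := "newest-cni-579606"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"

        // Same shape as the log's command: mount the tarball read-only,
        // mount the volume at /extractDir, run tar with lz4 decompression.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
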
	W1018 12:19:00.126891  326490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 12:19:00.126924  326490 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 12:19:00.126991  326490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:19:00.190480  326490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-579606 --name newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-579606 --network newest-cni-579606 --ip 192.168.85.2 --volume newest-cni-579606:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:19:00.476973  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Running}}
	I1018 12:19:00.495553  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.516545  326490 cli_runner.go:164] Run: docker exec newest-cni-579606 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:19:00.562561  326490 oci.go:144] the created container "newest-cni-579606" has a running status.
	I1018 12:19:00.562609  326490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa...
	I1018 12:19:00.820117  326490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:19:00.854117  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.877422  326490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:19:00.877449  326490 kic_runner.go:114] Args: [docker exec --privileged newest-cni-579606 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:19:00.925342  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.944520  326490 machine.go:93] provisionDockerMachine start ...
	I1018 12:19:00.944616  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:00.964493  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:00.964838  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:00.964858  326490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:19:01.103775  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.103807  326490 ubuntu.go:182] provisioning hostname "newest-cni-579606"
	I1018 12:19:01.103880  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.124094  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.124376  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.124392  326490 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-579606 && echo "newest-cni-579606" | sudo tee /etc/hostname
	I1018 12:19:01.270628  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.270703  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.289410  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.289674  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.289696  326490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-579606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-579606/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-579606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:19:01.423556  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:19:01.423583  326490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:19:01.423603  326490 ubuntu.go:190] setting up certificates
	I1018 12:19:01.423619  326490 provision.go:84] configureAuth start
	I1018 12:19:01.423685  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:01.442627  326490 provision.go:143] copyHostCerts
	I1018 12:19:01.442683  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:19:01.442692  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:19:01.442779  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:19:01.442877  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:19:01.442887  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:19:01.442920  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:19:01.443028  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:19:01.443058  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:19:01.443088  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:19:01.443142  326490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.newest-cni-579606 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-579606]
	I1018 12:19:01.605969  326490 provision.go:177] copyRemoteCerts
	I1018 12:19:01.606038  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:19:01.606085  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.625297  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:01.723582  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:19:01.744640  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:19:01.763599  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:19:01.784423  326490 provision.go:87] duration metric: took 360.788993ms to configureAuth
	I1018 12:19:01.784458  326490 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:19:01.784652  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:01.784752  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.804299  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.804508  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.804524  326490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:19:02.051413  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:19:02.051436  326490 machine.go:96] duration metric: took 1.106891251s to provisionDockerMachine
	I1018 12:19:02.051444  326490 client.go:171] duration metric: took 6.995862509s to LocalClient.Create
	I1018 12:19:02.051460  326490 start.go:167] duration metric: took 6.995914544s to libmachine.API.Create "newest-cni-579606"
	I1018 12:19:02.051470  326490 start.go:293] postStartSetup for "newest-cni-579606" (driver="docker")
	I1018 12:19:02.051482  326490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:19:02.051542  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:19:02.051582  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.069826  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.169332  326490 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:19:02.173028  326490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:19:02.173060  326490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:19:02.173075  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:19:02.173131  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:19:02.173202  326490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:19:02.173312  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:19:02.181632  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:02.201730  326490 start.go:296] duration metric: took 150.246741ms for postStartSetup
	I1018 12:19:02.202117  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.220168  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:02.220438  326490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:19:02.220477  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.238665  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.333039  326490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:19:02.337804  326490 start.go:128] duration metric: took 7.284234042s to createHost
	I1018 12:19:02.337830  326490 start.go:83] releasing machines lock for "newest-cni-579606", held for 7.284352735s
	I1018 12:19:02.337891  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.357339  326490 ssh_runner.go:195] Run: cat /version.json
	I1018 12:19:02.357373  326490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:19:02.357386  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.357430  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.376606  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.377490  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.526194  326490 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:02.532929  326490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:19:02.568991  326490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:19:02.574362  326490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:19:02.574428  326490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:19:02.602949  326490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 12:19:02.602987  326490 start.go:495] detecting cgroup driver to use...
	I1018 12:19:02.603019  326490 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:19:02.603065  326490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:19:02.619432  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:19:02.632985  326490 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:19:02.633047  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:19:02.650953  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:19:02.670802  326490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:19:02.756116  326490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:19:02.848839  326490 docker.go:234] disabling docker service ...
	I1018 12:19:02.848900  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:19:02.868131  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:19:02.881575  326490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:19:02.965443  326490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:19:03.051508  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:19:03.064380  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:19:03.079484  326490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:19:03.079554  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.090169  326490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:19:03.090229  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.099749  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.109431  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.118802  326490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:19:03.127410  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.136357  326490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.151150  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.160956  326490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:19:03.169094  326490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:19:03.177522  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.257714  326490 ssh_runner.go:195] Run: sudo systemctl restart crio
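
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf entirely with sed: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl, followed by a daemon-reload and restart. A condensed sketch of the core edits, run locally via os/exec for illustration (minikube executes them over SSH); the sed expressions and values come straight from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        steps := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' ` + conf,
            `sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
        for _, s := range steps {
            if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
                log.Fatalf("%q failed: %v\n%s", s, err, out)
            }
        }
    }
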
	I1018 12:19:03.374283  326490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:19:03.374356  326490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:19:03.378571  326490 start.go:563] Will wait 60s for crictl version
	I1018 12:19:03.378624  326490 ssh_runner.go:195] Run: which crictl
	I1018 12:19:03.382638  326490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:19:03.406896  326490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:19:03.406996  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.436202  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.466606  326490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:19:03.468046  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:19:03.485613  326490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:19:03.489792  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.502123  326490 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1018 12:19:00.846128  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:19:03.345904  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:03.503451  326490 kubeadm.go:883] updating cluster {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:19:03.503568  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:03.503623  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.537963  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.537988  326490 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:19:03.538037  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.564020  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.564061  326490 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:19:03.564071  326490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 12:19:03.564172  326490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-579606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:19:03.564251  326490 ssh_runner.go:195] Run: crio config
	I1018 12:19:03.609404  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:03.609430  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:03.609446  326490 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 12:19:03.609473  326490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-579606 NodeName:newest-cni-579606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:19:03.609666  326490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-579606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
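Every value in the generated kubeadm config above traces back to the cluster config logged earlier: the pod subnet from the kubeadm.pod-network-cidr extra option, the systemd cgroup driver detected on the host, and the CRI-O socket. A minimal sketch of rendering one fragment with text/template, assuming a simplified params struct rather than minikube's real template inputs:

    package main

    import (
        "os"
        "text/template"
    )

    // params is a simplified stand-in for minikube's kubeadm template inputs.
    type params struct {
        PodSubnet    string
        ServiceCIDR  string
        CgroupDriver string
        CRISocket    string
    }

    const fragment = `networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    ---
    cgroupDriver: {{.CgroupDriver}}
    containerRuntimeEndpoint: {{.CRISocket}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(fragment))
        // Values taken from the config dump above.
        t.Execute(os.Stdout, params{
            PodSubnet:    "10.42.0.0/16",
            ServiceCIDR:  "10.96.0.0/12",
            CgroupDriver: "systemd",
            CRISocket:    "unix:///var/run/crio/crio.sock",
        })
    }
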
	I1018 12:19:03.609744  326490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:19:03.618201  326490 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:19:03.618283  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:19:03.626679  326490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:19:03.639983  326490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:19:03.655953  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 12:19:03.668846  326490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:19:03.672666  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.683073  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.766600  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:03.797248  326490 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606 for IP: 192.168.85.2
	I1018 12:19:03.797269  326490 certs.go:195] generating shared ca certs ...
	I1018 12:19:03.797296  326490 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:03.797445  326490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:19:03.797500  326490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:19:03.797513  326490 certs.go:257] generating profile certs ...
	I1018 12:19:03.797585  326490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key
	I1018 12:19:03.797609  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt with IP's: []
	I1018 12:19:04.196975  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt ...
	I1018 12:19:04.197011  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt: {Name:mka42a654d079c2a23058a0f14154e8b79ca5459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197222  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key ...
	I1018 12:19:04.197241  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key: {Name:mk220b04a2afae0bcb10852575c558c1404f1005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197355  326490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad
	I1018 12:19:04.197378  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 12:19:04.310285  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad ...
	I1018 12:19:04.310312  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad: {Name:mke978bbcfe8f1a2cbf3531371f43b4028ef678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310509  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad ...
	I1018 12:19:04.310528  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad: {Name:mk42b24c0f6b076eda0e07dce8424a94f5271da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310658  326490 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt
	I1018 12:19:04.310784  326490 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key
	I1018 12:19:04.310873  326490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key
	I1018 12:19:04.310898  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt with IP's: []
	I1018 12:19:04.385339  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt ...
	I1018 12:19:04.385370  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt: {Name:mk66f445c5bca9cdd3c55e6ee197ee7cb14dae9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.385567  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key ...
	I1018 12:19:04.385584  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key: {Name:mk29fee630df834569bfa6e21a7cc861705c1451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
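
The profile certs generated above are ordinary x509: a client cert for minikube-user, an apiserver serving cert whose IP SANs are the ones listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2), and an aggregator proxy-client cert, all signed by the shared minikubeCA. A compact sketch of issuing a cert with those SANs, with a throwaway self-signed CA standing in for the one on disk and error handling elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway self-signed CA; minikube reuses the one on disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        // Serving cert with the same IP SANs the log lists.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
        fmt.Println(len(der), err)
    }
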
	I1018 12:19:04.385849  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:19:04.385893  326490 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:19:04.385908  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:19:04.385940  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:19:04.385972  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:19:04.386016  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:19:04.386076  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:04.386584  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:19:04.405651  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:19:04.423574  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:19:04.441442  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:19:04.460483  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:19:04.478325  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:19:04.496004  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:19:04.514077  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:19:04.532154  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:19:04.552898  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:19:04.572871  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:19:04.593879  326490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:19:04.608514  326490 ssh_runner.go:195] Run: openssl version
	I1018 12:19:04.615149  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:19:04.624305  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628375  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628425  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.663623  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:19:04.673411  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:19:04.682605  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.686974  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.687061  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.724063  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:19:04.733543  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:19:04.742538  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746549  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746601  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.781517  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
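
The sequence above is minikube's CA install step: each PEM is copied under /usr/share/ca-certificates, hashed with "openssl x509 -hash -noout", and symlinked into /etc/ssl/certs as <hash>.0 (b5213941.0 for minikubeCA.pem here), the filename scheme OpenSSL uses to locate trust anchors. A minimal Go sketch of that last step, assuming openssl is on PATH; installCA is a hypothetical helper, not minikube's own code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA symlinks certPath into trustDir under its OpenSSL subject
	// hash (<hash>.0), the lookup name OpenSSL expects for trust anchors.
	func installCA(certPath, trustDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // drop a stale link first, as "ln -fs" does
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
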
	I1018 12:19:04.791034  326490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:19:04.794955  326490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:19:04.795012  326490 kubeadm.go:400] StartCluster: {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:04.795092  326490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:04.795154  326490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:04.823284  326490 cri.go:89] found id: ""
	I1018 12:19:04.823356  326490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:19:04.832075  326490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:19:04.840408  326490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:19:04.840478  326490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	W1018 12:19:00.958896  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:03.459593  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:05.845166  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:07.344832  317167 pod_ready.go:94] pod "coredns-66bc5c9577-7qgqj" is "Ready"
	I1018 12:19:07.344882  317167 pod_ready.go:86] duration metric: took 37.505154401s for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.347549  317167 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.351825  317167 pod_ready.go:94] pod "etcd-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.351851  317167 pod_ready.go:86] duration metric: took 4.270969ms for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.353893  317167 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.357781  317167 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.357802  317167 pod_ready.go:86] duration metric: took 3.889439ms for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.359743  317167 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.543689  317167 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.543718  317167 pod_ready.go:86] duration metric: took 183.92899ms for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.742726  317167 pod_ready.go:83] waiting for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.142748  317167 pod_ready.go:94] pod "kube-proxy-bffkr" is "Ready"
	I1018 12:19:08.142797  317167 pod_ready.go:86] duration metric: took 400.045074ms for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.343168  317167 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743587  317167 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:08.743618  317167 pod_ready.go:86] duration metric: took 400.420854ms for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743633  317167 pod_ready.go:40] duration metric: took 38.908363338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:08.790224  317167 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:08.792295  317167 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-028309" cluster and "default" namespace by default
	I1018 12:19:04.849545  326490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:19:04.849562  326490 kubeadm.go:157] found existing configuration files:
	
	I1018 12:19:04.849600  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:19:04.857827  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:19:04.857889  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:19:04.865939  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:19:04.873915  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:19:04.873983  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:19:04.881861  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.890019  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:19:04.890088  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.898082  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:19:04.906181  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:19:04.906236  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
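
The grep/rm pairs above are kubeadm.go's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so kubeadm init can regenerate it (on this first start all four files are simply absent, hence the status-2 exits). A sketch of the same check, assuming the files are local rather than behind ssh_runner; cleanStaleKubeconfigs is a hypothetical helper:

	package main

	import (
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs deletes every kubeconfig that is missing or
	// does not reference endpoint, mirroring the grep -> rm -f sequence.
	func cleanStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(p) // absent or stale: let kubeadm rewrite it
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
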
	I1018 12:19:04.914044  326490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:19:04.975919  326490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:19:05.037824  326490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 12:19:05.957990  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:07.958857  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:09.958915  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:12.459097  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:14.458133  319485 pod_ready.go:94] pod "coredns-66bc5c9577-b6h9l" is "Ready"
	I1018 12:19:14.458159  319485 pod_ready.go:86] duration metric: took 31.505202758s for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.459959  319485 pod_ready.go:83] waiting for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.463248  319485 pod_ready.go:94] pod "etcd-embed-certs-175371" is "Ready"
	I1018 12:19:14.463270  319485 pod_ready.go:86] duration metric: took 3.284914ms for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.465089  319485 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.468551  319485 pod_ready.go:94] pod "kube-apiserver-embed-certs-175371" is "Ready"
	I1018 12:19:14.468570  319485 pod_ready.go:86] duration metric: took 3.458555ms for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.470303  319485 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.657339  319485 pod_ready.go:94] pod "kube-controller-manager-embed-certs-175371" is "Ready"
	I1018 12:19:14.657367  319485 pod_ready.go:86] duration metric: took 187.044696ms for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.856446  319485 pod_ready.go:83] waiting for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.257025  319485 pod_ready.go:94] pod "kube-proxy-t2x4c" is "Ready"
	I1018 12:19:15.257053  319485 pod_ready.go:86] duration metric: took 400.581639ms for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.456953  319485 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.893038  326490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:19:15.893090  326490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:19:15.893217  326490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:19:15.893353  326490 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 12:19:15.893498  326490 kubeadm.go:318] OS: Linux
	I1018 12:19:15.893566  326490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:19:15.893627  326490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:19:15.893696  326490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:19:15.893776  326490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:19:15.893850  326490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:19:15.893910  326490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:19:15.893969  326490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:19:15.894035  326490 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 12:19:15.894133  326490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:19:15.894281  326490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:19:15.894412  326490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:19:15.894516  326490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:19:15.896254  326490 out.go:252]   - Generating certificates and keys ...
	I1018 12:19:15.896337  326490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:19:15.896412  326490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:19:15.896489  326490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:19:15.896543  326490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:19:15.896599  326490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:19:15.896657  326490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:19:15.896708  326490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:19:15.896861  326490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.896916  326490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:19:15.897021  326490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.897080  326490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:19:15.897134  326490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:19:15.897176  326490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:19:15.897227  326490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:19:15.897280  326490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:19:15.897332  326490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:19:15.897378  326490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:19:15.897435  326490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:19:15.897486  326490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:19:15.897560  326490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:19:15.897622  326490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:19:15.899813  326490 out.go:252]   - Booting up control plane ...
	I1018 12:19:15.899904  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:19:15.899977  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:19:15.900053  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:19:15.900169  326490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:19:15.900307  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:19:15.900475  326490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:19:15.900586  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:19:15.900647  326490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:19:15.900835  326490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:19:15.900980  326490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:19:15.901059  326490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501237256s
	I1018 12:19:15.901160  326490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:19:15.901257  326490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 12:19:15.901388  326490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:19:15.901499  326490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:19:15.901562  326490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.520322183s
	I1018 12:19:15.901615  326490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.051874304s
	I1018 12:19:15.901668  326490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001667177s
	I1018 12:19:15.901817  326490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:19:15.902084  326490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:19:15.902160  326490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:19:15.902393  326490 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-579606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:19:15.902484  326490 kubeadm.go:318] [bootstrap-token] Using token: pmkr01.67na6m3iuf7b6wke
	I1018 12:19:15.904615  326490 out.go:252]   - Configuring RBAC rules ...
	I1018 12:19:15.904796  326490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:19:15.904875  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:19:15.905028  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:19:15.905156  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:19:15.905290  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:19:15.905391  326490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:19:15.905553  326490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:19:15.905613  326490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:19:15.905676  326490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:19:15.905684  326490 kubeadm.go:318] 
	I1018 12:19:15.905730  326490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:19:15.905736  326490 kubeadm.go:318] 
	I1018 12:19:15.905836  326490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:19:15.905852  326490 kubeadm.go:318] 
	I1018 12:19:15.905891  326490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:19:15.905967  326490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:19:15.906032  326490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:19:15.906040  326490 kubeadm.go:318] 
	I1018 12:19:15.906120  326490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:19:15.906130  326490 kubeadm.go:318] 
	I1018 12:19:15.906195  326490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:19:15.906216  326490 kubeadm.go:318] 
	I1018 12:19:15.906289  326490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:19:15.906393  326490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:19:15.906490  326490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:19:15.906500  326490 kubeadm.go:318] 
	I1018 12:19:15.906596  326490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:19:15.906826  326490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:19:15.906844  326490 kubeadm.go:318] 
	I1018 12:19:15.906936  326490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907119  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 12:19:15.907164  326490 kubeadm.go:318] 	--control-plane 
	I1018 12:19:15.907173  326490 kubeadm.go:318] 
	I1018 12:19:15.907323  326490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:19:15.907337  326490 kubeadm.go:318] 
	I1018 12:19:15.907436  326490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907606  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
	I1018 12:19:15.907623  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:15.907632  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:15.857063  319485 pod_ready.go:94] pod "kube-scheduler-embed-certs-175371" is "Ready"
	I1018 12:19:15.857091  319485 pod_ready.go:86] duration metric: took 400.110605ms for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.857103  319485 pod_ready.go:40] duration metric: took 32.907623738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:15.908233  319485 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:15.909420  326490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:19:15.910368  319485 out.go:179] * Done! kubectl is now configured to use "embed-certs-175371" cluster and "default" namespace by default
	I1018 12:19:15.911428  326490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:19:15.916203  326490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:19:15.916223  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:19:15.930716  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
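
Because the docker driver is paired with the crio runtime, cni.go picks kindnet: it confirms /opt/cni/bin/portmap exists, writes the manifest to /var/tmp/minikube/cni.yaml, and applies it with the cluster's pinned kubectl. An equivalent of that apply step sketched with os/exec (paths as in the log; applyManifest is a hypothetical helper, not minikube's own code):

	package main

	import (
		"os"
		"os/exec"
	)

	// applyManifest shells out to the pinned kubectl to apply a manifest
	// against the in-VM kubeconfig, as the ssh_runner step above does.
	func applyManifest(kubectl, kubeconfig, manifest string) error {
		cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := applyManifest("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml"); err != nil {
			os.Exit(1)
		}
	}
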
	I1018 12:19:16.186811  326490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:19:16.186877  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.186927  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-579606 minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=newest-cni-579606 minikube.k8s.io/primary=true
	I1018 12:19:16.200483  326490 ops.go:34] apiserver oom_adj: -16
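
The "cat /proc/$(pgrep kube-apiserver)/oom_adj" probe that ops.go logs as -16 checks that the kubelet made the apiserver nearly unkillable: -16 is consistent with the legacy oom_adj view of the very low oom_score_adj the kubelet assigns to guaranteed control-plane pods, so the kernel's OOM killer targets it last. A small reader for the same value, assuming a Linux /proc and pgrep on PATH; readOOMAdj is a hypothetical helper:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// readOOMAdj returns the oom_adj score of the newest process matching
	// name, mirroring the /proc probe in the log above.
	func readOOMAdj(name string) (string, error) {
		pid, err := exec.Command("pgrep", "-n", name).Output()
		if err != nil {
			return "", err
		}
		data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(data)), nil
	}

	func main() {
		adj, err := readOOMAdj("kube-apiserver")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("apiserver oom_adj:", adj) // -16 in the run above
	}
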
	I1018 12:19:16.289962  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.790297  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.290815  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.790675  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.290971  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.791051  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.291007  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.790041  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.290948  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.364194  326490 kubeadm.go:1113] duration metric: took 4.177366872s to wait for elevateKubeSystemPrivileges
	I1018 12:19:20.364236  326490 kubeadm.go:402] duration metric: took 15.569226889s to StartCluster
	I1018 12:19:20.364257  326490 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.364341  326490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:20.366539  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.366808  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:19:20.366823  326490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:19:20.366886  326490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:19:20.366978  326490 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-579606"
	I1018 12:19:20.366998  326490 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-579606"
	I1018 12:19:20.367029  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.367046  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.367047  326490 addons.go:69] Setting default-storageclass=true in profile "newest-cni-579606"
	I1018 12:19:20.367088  326490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-579606"
	I1018 12:19:20.367465  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.367552  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.368575  326490 out.go:179] * Verifying Kubernetes components...
	I1018 12:19:20.370326  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:20.394477  326490 addons.go:238] Setting addon default-storageclass=true in "newest-cni-579606"
	I1018 12:19:20.394522  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.394869  326490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:19:20.395017  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.396676  326490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.396702  326490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:19:20.396772  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423305  326490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.423405  326490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:19:20.423499  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423817  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.453744  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.465106  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:19:20.532388  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:20.546306  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.568683  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.669063  326490 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
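
The long one-liner at 12:19:20 pipes the coredns ConfigMap through sed to splice a hosts{} stanza (mapping host.minikube.internal to the host gateway, 192.168.85.1 here) in front of the forward plugin, then feeds the result back through kubectl replace. The same Corefile edit sketched in Go; spliceHostRecord is a hypothetical helper, since minikube itself does this with sed:

	package main

	import (
		"fmt"
		"strings"
	)

	// spliceHostRecord inserts a CoreDNS hosts{} stanza immediately before
	// the "forward . /etc/resolv.conf" line so host.minikube.internal
	// resolves inside the cluster, as the sed pipeline above does.
	func spliceHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(stanza)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(spliceHostRecord(corefile, "192.168.85.1"))
	}
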
	I1018 12:19:20.670556  326490 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:19:20.670609  326490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:19:20.899558  326490 api_server.go:72] duration metric: took 532.701277ms to wait for apiserver process to appear ...
	I1018 12:19:20.899596  326490 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:19:20.899623  326490 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:20.906703  326490 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:19:20.907612  326490 api_server.go:141] control plane version: v1.34.1
	I1018 12:19:20.907641  326490 api_server.go:131] duration metric: took 8.037799ms to wait for apiserver health ...
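
The api_server.go health wait above is a plain HTTPS poll of /healthz until it answers 200 "ok" (8ms here because the endpoint was already up). A minimal sketch of that loop, assuming the caller accepts skipping TLS verification against the test cluster's self-signed apiserver cert; waitHealthz is a hypothetical helper:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns HTTP 200 or timeout elapses,
	// mirroring the api_server.go healthz wait in the log above.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// self-signed apiserver cert in a throwaway test cluster
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
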
	I1018 12:19:20.907652  326490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:19:20.909941  326490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:19:20.911175  326490 addons.go:514] duration metric: took 544.288646ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:19:20.911194  326490 system_pods.go:59] 8 kube-system pods found
	I1018 12:19:20.911217  326490 system_pods.go:61] "coredns-66bc5c9577-p6bts" [49609244-6dc2-4950-8fad-8240b827ecca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911224  326490 system_pods.go:61] "etcd-newest-cni-579606" [496c00b4-7ad1-40c0-a440-c396a752cbf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:19:20.911231  326490 system_pods.go:61] "kindnet-2c4t6" [08c0018d-0f0f-435e-8868-31818d5639fa] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:19:20.911238  326490 system_pods.go:61] "kube-apiserver-newest-cni-579606" [a39961c7-019e-41ec-8843-e98e9c2e3604] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:19:20.911249  326490 system_pods.go:61] "kube-controller-manager-newest-cni-579606" [992bd82d-6489-43da-83ba-8dcb6b86fe48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:19:20.911262  326490 system_pods.go:61] "kube-proxy-5hjgn" [915df613-23ce-49e2-b125-d223024077b0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:19:20.911291  326490 system_pods.go:61] "kube-scheduler-newest-cni-579606" [2a1de39e-4fa6-49e8-a420-75a6c82ac73e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:19:20.911306  326490 system_pods.go:61] "storage-provisioner" [c7ff4c04-56e5-469b-9af2-dc1bf4fe969d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911314  326490 system_pods.go:74] duration metric: took 3.655766ms to wait for pod list to return data ...
	I1018 12:19:20.911324  326490 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:19:20.913681  326490 default_sa.go:45] found service account: "default"
	I1018 12:19:20.913702  326490 default_sa.go:55] duration metric: took 2.371901ms for default service account to be created ...
	I1018 12:19:20.913712  326490 kubeadm.go:586] duration metric: took 546.861004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:20.913730  326490 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:19:20.916084  326490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:19:20.916105  326490 node_conditions.go:123] node cpu capacity is 8
	I1018 12:19:20.916117  326490 node_conditions.go:105] duration metric: took 2.382506ms to run NodePressure ...
	I1018 12:19:20.916128  326490 start.go:241] waiting for startup goroutines ...
	I1018 12:19:21.173827  326490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-579606" context rescaled to 1 replicas
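
kapi.go then rescales the coredns deployment to a single replica, since a one-node cluster gains nothing from the default two. An equivalent kubectl invocation run from Go (a sketch only; minikube uses client-go here rather than shelling out, and the context name is taken from the log above):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Scale coredns down to one replica in kube-system, matching the
		// "rescaled to 1 replicas" log line above.
		cmd := exec.Command("kubectl", "--context", "newest-cni-579606",
			"-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
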
	I1018 12:19:21.173870  326490 start.go:246] waiting for cluster config update ...
	I1018 12:19:21.173882  326490 start.go:255] writing updated cluster config ...
	I1018 12:19:21.174193  326490 ssh_runner.go:195] Run: rm -f paused
	I1018 12:19:21.223166  326490 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:21.225317  326490 out.go:179] * Done! kubectl is now configured to use "newest-cni-579606" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 12:18:39 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:39.57108058Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:18:39 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:39.779057419Z" level=info msg="Removing container: 0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963" id=9a7b6e65-c021-4dd2-a7c8-24357f84f8c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:39 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:39.793295763Z" level=info msg="Removed container 0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=9a7b6e65-c021-4dd2-a7c8-24357f84f8c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.709550204Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7ab41b1e-9ddb-4954-82ab-3778cac993d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.713025013Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=895c0ccb-22bd-413a-a66e-e5dc0445b3b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.716577468Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=12898ce6-b9f8-4bb4-8daf-6810e70845ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.719104037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.728278528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.728960268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.766546895Z" level=info msg="Created container 6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=12898ce6-b9f8-4bb4-8daf-6810e70845ae name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.767261324Z" level=info msg="Starting container: 6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2" id=b29e4fc0-7cc4-4bf3-aac4-c8e6935302ed name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.769680437Z" level=info msg="Started container" PID=1775 containerID=6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper id=b29e4fc0-7cc4-4bf3-aac4-c8e6935302ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=d813324b7a87994aebddb320d998d445925afdb7cec91d6a467aa9ee8202f79c
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.826246681Z" level=info msg="Removing container: 6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c" id=078d2263-0627-484b-9b6b-eebdc95fb449 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:54 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:54.83738487Z" level=info msg="Removed container 6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6/dashboard-metrics-scraper" id=078d2263-0627-484b-9b6b-eebdc95fb449 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.842449416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a2437e6a-0e71-4a94-86c9-d3e8f5d2812f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.938274156Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2a7fe16-e512-4c93-a952-5d2945272074 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.961071175Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7aeaac60-e4fd-4a3b-8878-0cdb348d2cc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:18:59 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:18:59.961402227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.098170721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.098322806Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5a093b1a960020e8b1243dad9604b3824b6eaf08228cfc1d62dbf4062cd5f465/merged/etc/passwd: no such file or directory"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.098346358Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5a093b1a960020e8b1243dad9604b3824b6eaf08228cfc1d62dbf4062cd5f465/merged/etc/group: no such file or directory"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.099480692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.126906625Z" level=info msg="Created container 7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d: kube-system/storage-provisioner/storage-provisioner" id=7aeaac60-e4fd-4a3b-8878-0cdb348d2cc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.127801407Z" level=info msg="Starting container: 7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d" id=b7a0cacf-7e6f-44df-8834-112a2c33f171 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:00 default-k8s-diff-port-028309 crio[559]: time="2025-10-18T12:19:00.129879368Z" level=info msg="Started container" PID=1789 containerID=7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d description=kube-system/storage-provisioner/storage-provisioner id=b7a0cacf-7e6f-44df-8834-112a2c33f171 name=/runtime.v1.RuntimeService/StartContainer sandboxID=65e4b9b67d10b51a01e0df6de82304a1bf98eec7ec885b2e85ebe735e7a60358
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7badc800fa403       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   65e4b9b67d10b       storage-provisioner                                    kube-system
	6ef023ef21b14       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   d813324b7a879       dashboard-metrics-scraper-6ffb444bf9-tq7v6             kubernetes-dashboard
	4b69327aa0d0a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   0d906b90aa6bd       kubernetes-dashboard-855c9754f9-lmkc8                  kubernetes-dashboard
	3a791b10f6b72       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   83c1e5ead4a6e       coredns-66bc5c9577-7qgqj                               kube-system
	030516fe569e1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   c17889afe31a4       busybox                                                default
	3d8531f8819a1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   a291fe8320284       kube-proxy-bffkr                                       kube-system
	beda0d0ad2456       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   1050ac19a66bb       kindnet-hbfgg                                          kube-system
	134c68115df40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   65e4b9b67d10b       storage-provisioner                                    kube-system
	47b0a89c606a2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   49e6226018b07       kube-apiserver-default-k8s-diff-port-028309            kube-system
	98cd3ecd97b52       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   4c1e3a255496d       kube-scheduler-default-k8s-diff-port-028309            kube-system
	b4e6ed35e6415       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   2a56df8397d44       kube-controller-manager-default-k8s-diff-port-028309   kube-system
	7f679fa5b11a9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   c7991f4db00c1       etcd-default-k8s-diff-port-028309                      kube-system
	
	
	==> coredns [3a791b10f6b7292113c4ab4334268fa9103739de78ecf9577cda655bc7e04ad8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57614 - 60558 "HINFO IN 388194415275680658.1841293297904610492. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.049800991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-028309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-028309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-028309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_17_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:17:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-028309
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:19:19 +0000   Sat, 18 Oct 2025 12:17:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-028309
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                ff570318-6181-45ed-80f8-45dccb2d1794
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-7qgqj                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-028309                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-hbfgg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-028309             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-028309    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-bffkr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-028309             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tq7v6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lmkc8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-028309 event: Registered Node default-k8s-diff-port-028309 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-028309 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-028309 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-028309 event: Registered Node default-k8s-diff-port-028309 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [7f679fa5b11a9e7c241aa782944e0a63d28817b54b5a1f2424c606492f4167fd] <==
	{"level":"warn","ts":"2025-10-18T12:18:27.625164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.631838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.638345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.644919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.651337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.659141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.666430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.675965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.684290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.693888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.702174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.710966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.718477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.727259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.734945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.741567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.755862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.762641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.778188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.785221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.791838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:27.842986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:18:58.801538Z","caller":"traceutil/trace.go:172","msg":"trace[1821153281] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"123.719441ms","start":"2025-10-18T12:18:58.677795Z","end":"2025-10-18T12:18:58.801514Z","steps":["trace[1821153281] 'process raft request'  (duration: 123.587121ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:18:59.693798Z","caller":"traceutil/trace.go:172","msg":"trace[201754330] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"142.413308ms","start":"2025-10-18T12:18:59.551337Z","end":"2025-10-18T12:18:59.693751Z","steps":["trace[201754330] 'process raft request'  (duration: 128.118927ms)","trace[201754330] 'compare'  (duration: 14.174445ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T12:19:00.098886Z","caller":"traceutil/trace.go:172","msg":"trace[480506682] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"249.597908ms","start":"2025-10-18T12:18:59.849269Z","end":"2025-10-18T12:19:00.098867Z","steps":["trace[480506682] 'process raft request'  (duration: 249.456601ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:19:25 up  1:01,  0 user,  load average: 3.21, 3.86, 2.61
	Linux default-k8s-diff-port-028309 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [beda0d0ad2456588c42c64e748d9c9a3a59ec5a890826c601cd42d1a48c80717] <==
	I1018 12:18:29.334277       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:18:29.334615       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 12:18:29.334848       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:18:29.334869       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:18:29.334890       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:18:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:18:29.537834       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:18:29.634176       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:18:29.634323       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:18:29.634627       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:18:30.034513       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:18:30.034549       1 metrics.go:72] Registering metrics
	I1018 12:18:30.034624       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:39.537948       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:18:39.538049       1 main.go:301] handling current node
	I1018 12:18:49.544854       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:18:49.544904       1 main.go:301] handling current node
	I1018 12:18:59.537882       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:18:59.537943       1 main.go:301] handling current node
	I1018 12:19:09.539198       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:19:09.539282       1 main.go:301] handling current node
	I1018 12:19:19.537491       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1018 12:19:19.537534       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47b0a89c606a2ed0c69b3d57a1254c989803ac5ff1e9913ca52c6c7b7c451aa9] <==
	I1018 12:18:28.316899       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:18:28.316604       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 12:18:28.317526       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 12:18:28.317570       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:18:28.317579       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:18:28.317584       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:18:28.317590       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:18:28.318702       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 12:18:28.318724       1 policy_source.go:240] refreshing policies
	I1018 12:18:28.321425       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 12:18:28.325916       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 12:18:28.358297       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:18:28.369161       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:18:28.569663       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:18:28.600568       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:18:28.625500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:18:28.643492       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:18:28.653205       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:18:28.694506       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.27.82"}
	I1018 12:18:28.706927       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.163.242"}
	I1018 12:18:29.219799       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:18:31.698146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:18:32.049505       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:18:32.200433       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b4e6ed35e6415d74f156e6f9b2caf8f4eee3580d9a2b0e69aa0489217f5ecff8] <==
	I1018 12:18:31.613033       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:18:31.616248       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:18:31.619548       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:18:31.624845       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:18:31.628134       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:18:31.645533       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:18:31.645548       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:18:31.645651       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:18:31.645676       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:31.645695       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:18:31.645710       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:18:31.645856       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:18:31.646218       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:18:31.646299       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:18:31.646303       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:18:31.646592       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:18:31.648052       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:18:31.648906       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:18:31.649141       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:18:31.649317       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:18:31.649340       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:18:31.650975       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:18:31.654513       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:18:31.664841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:31.669277       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [3d8531f8819a155bae8f5276bec64b4d55f23d29586c6dc59ecee2e01d0eac4c] <==
	I1018 12:18:29.105755       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:18:29.162636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:18:29.263297       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:18:29.263353       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 12:18:29.263511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:18:29.286860       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:18:29.286924       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:18:29.293424       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:18:29.294062       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:18:29.294103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:29.295590       1 config.go:200] "Starting service config controller"
	I1018 12:18:29.295612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:18:29.295817       1 config.go:309] "Starting node config controller"
	I1018 12:18:29.295874       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:18:29.295886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:18:29.296089       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:18:29.296096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:18:29.296133       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:18:29.296151       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:18:29.395832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:18:29.397041       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:18:29.397081       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [98cd3ecd97b52b4667430825deaaf5b42f0481bce7f80bdb63cc7d18be3f2c43] <==
	I1018 12:18:26.620180       1 serving.go:386] Generated self-signed cert in-memory
	W1018 12:18:28.227517       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:18:28.227555       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:18:28.227567       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:18:28.227576       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:18:28.286125       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:18:28.286159       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:28.289098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:28.289136       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:28.290191       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:18:28.290272       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:18:28.389534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:18:37 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:37.042185     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 12:18:37 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:37.721455     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lmkc8" podStartSLOduration=2.921802052 podStartE2EDuration="5.721425861s" podCreationTimestamp="2025-10-18 12:18:32 +0000 UTC" firstStartedPulling="2025-10-18 12:18:32.503369237 +0000 UTC m=+6.883742991" lastFinishedPulling="2025-10-18 12:18:35.302993046 +0000 UTC m=+9.683366800" observedRunningTime="2025-10-18 12:18:35.77453946 +0000 UTC m=+10.154913245" watchObservedRunningTime="2025-10-18 12:18:37.721425861 +0000 UTC m=+12.101799633"
	Oct 18 12:18:38 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:38.608943     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podStartSLOduration=1.423096885 podStartE2EDuration="6.608919707s" podCreationTimestamp="2025-10-18 12:18:32 +0000 UTC" firstStartedPulling="2025-10-18 12:18:32.503549139 +0000 UTC m=+6.883922903" lastFinishedPulling="2025-10-18 12:18:37.689371973 +0000 UTC m=+12.069745725" observedRunningTime="2025-10-18 12:18:37.776448308 +0000 UTC m=+12.156822080" watchObservedRunningTime="2025-10-18 12:18:38.608919707 +0000 UTC m=+12.989293479"
	Oct 18 12:18:38 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:38.768400     721 scope.go:117] "RemoveContainer" containerID="0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963"
	Oct 18 12:18:39 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:39.774803     721 scope.go:117] "RemoveContainer" containerID="0dc9ec88678ebd70c0850aeb79412ea4470360e0cfcd0a1f70b1429ae6644963"
	Oct 18 12:18:39 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:39.775181     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:39 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:39.775354     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:40 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:40.779330     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:40 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:40.779564     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:41 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:41.782254     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:41 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:41.782479     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:54.708869     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:54.823741     721 scope.go:117] "RemoveContainer" containerID="6b9479e8ac443821a49c0d64515fcf19468741bbf01754cab327588eca64ac9c"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:54.824034     721 scope.go:117] "RemoveContainer" containerID="6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	Oct 18 12:18:54 default-k8s-diff-port-028309 kubelet[721]: E1018 12:18:54.824249     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:18:59 default-k8s-diff-port-028309 kubelet[721]: I1018 12:18:59.841934     721 scope.go:117] "RemoveContainer" containerID="134c68115df400299f718a242dcc3487786865366d4c86ae9057813ce2261cb7"
	Oct 18 12:19:01 default-k8s-diff-port-028309 kubelet[721]: I1018 12:19:01.768803     721 scope.go:117] "RemoveContainer" containerID="6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	Oct 18 12:19:01 default-k8s-diff-port-028309 kubelet[721]: E1018 12:19:01.769005     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:19:13 default-k8s-diff-port-028309 kubelet[721]: I1018 12:19:13.709218     721 scope.go:117] "RemoveContainer" containerID="6ef023ef21b14bff971ec47fc55a7ec6c3d7bcc299038c2b4624ba8d4e33f5d2"
	Oct 18 12:19:13 default-k8s-diff-port-028309 kubelet[721]: E1018 12:19:13.709478     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tq7v6_kubernetes-dashboard(71b0408d-e77e-48df-8889-7483cda6310e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tq7v6" podUID="71b0408d-e77e-48df-8889-7483cda6310e"
	Oct 18 12:19:20 default-k8s-diff-port-028309 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:19:20 default-k8s-diff-port-028309 kubelet[721]: I1018 12:19:20.986650     721 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 12:19:21 default-k8s-diff-port-028309 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:19:21 default-k8s-diff-port-028309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:19:21 default-k8s-diff-port-028309 systemd[1]: kubelet.service: Consumed 1.879s CPU time.
	
	
	==> kubernetes-dashboard [4b69327aa0d0a64fdafbee660e64555b3ddd443d95b2e8615a545e1a1776ef12] <==
	2025/10/18 12:18:35 Starting overwatch
	2025/10/18 12:18:35 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:35 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:35 Using secret token for csrf signing
	2025/10/18 12:18:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:35 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:18:35 Generating JWE encryption key
	2025/10/18 12:18:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:35 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:35 Creating in-cluster Sidecar client
	2025/10/18 12:18:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:35 Serving insecurely on HTTP port: 9090
	2025/10/18 12:19:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [134c68115df400299f718a242dcc3487786865366d4c86ae9057813ce2261cb7] <==
	I1018 12:18:29.070585       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:18:59.075248       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7badc800fa4039e5ced42d3de7cb9486ff1368bed00b2093776a0935921d9a3d] <==
	I1018 12:19:00.144568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:19:00.154271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:19:00.154325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:19:00.157908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:03.613477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:07.874120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:11.472272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:14.526939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.549440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.554017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:17.554204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:19:17.554289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5d62124-6ee2-44d3-a6fa-ae6c6c57818d", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-028309_0d9d13a4-48ec-4a17-97e6-cc2f1b28adb6 became leader
	I1018 12:19:17.554358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-028309_0d9d13a4-48ec-4a17-97e6-cc2f1b28adb6!
	W1018 12:19:17.557072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.560031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:17.654778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-028309_0d9d13a4-48ec-4a17-97e6-cc2f1b28adb6!
	W1018 12:19:19.563816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:19.568797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:21.573489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:21.578679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:23.582797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:23.587976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:25.591728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:25.596258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
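The kubelet entries above show dashboard-metrics-scraper-6ffb444bf9-tq7v6 cycling through CrashLoopBackOff. A minimal follow-up sketch (not part of the harness; pod, namespace, and context names are taken from this run's log) to pull the crashed container's output:

	kubectl --context default-k8s-diff-port-028309 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-tq7v6 --previous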
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309: exit status 2 (316.733247ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-028309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.68s)
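The failing step here is the pause operation itself; per the Audit table later in this report, the harness invokes it as `pause -p <profile> --alsologtostderr -v=1`. A minimal by-hand reproduction (assuming the profile from this run is still up):

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-028309 --alsologtostderr -v=1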

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.152244ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
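The MK_ADDON_ENABLE_PAUSED error above stems from the paused-state check shelling out to `sudo runc list -f json` on the node, as the stderr chain ("check paused: list paused: runc") records; on this crio-backed profile no runc state directory exists, hence the `open /run/runc: no such file or directory`. A sketch of reproducing the underlying failure by hand (profile name from this run):

	out/minikube-linux-amd64 ssh -p newest-cni-579606 "sudo runc list -f json"
	# expected to exit nonzero with: open /run/runc: no such file or directory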
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-579606
helpers_test.go:243: (dbg) docker inspect newest-cni-579606:

-- stdout --
	[
	    {
	        "Id": "641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681",
	        "Created": "2025-10-18T12:19:00.208907647Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:19:00.247070871Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/hostname",
	        "HostsPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/hosts",
	        "LogPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681-json.log",
	        "Name": "/newest-cni-579606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-579606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-579606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681",
	                "LowerDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-579606",
	                "Source": "/var/lib/docker/volumes/newest-cni-579606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-579606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-579606",
	                "name.minikube.sigs.k8s.io": "newest-cni-579606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "760df1f8b2912d244cd77610b12371d486be78eb741fcc7777a5036f60392771",
	            "SandboxKey": "/var/run/docker/netns/760df1f8b291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-579606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:24:9f:ba:56:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f1c73ac1e12d550471cb62895be2add81ac8cf17de04960f0eacccc32c8d7ed",
	                    "EndpointID": "e04efc031755602e60a86e14487cbff9efaa73329c2c86fc359e1d4ec54351d5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-579606",
	                        "641d4379c21a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
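Single fields can be pulled from the inspect document above with a Go template instead of re-reading the full JSON; for example (container name from this run), the host port that maps to the API server endpoint:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-579606
	# prints 33131 for the state captured above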
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-579606 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-579606 logs -n 25: (1.024369867s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-406541 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:17 UTC │
	│ start   │ -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ default-k8s-diff-port-028309 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p default-k8s-diff-port-028309 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
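
Editor's note: the Audit table above is plain box-drawn text, so it can be sliced by the │ separators if you want to pull out just one profile's history. A minimal Go sketch under that assumption; the file name audit.txt is hypothetical (a saved copy of the table), not something the report produces:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("audit.txt") // hypothetical dump of the table above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // rows are very wide
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(strings.TrimSpace(line), "│") {
			continue // skip borders and non-table lines
		}
		cells := strings.Split(line, "│")
		if len(cells) < 8 { // leading/trailing fragments plus 7 columns
			continue
		}
		// Columns: COMMAND, ARGS, PROFILE, USER, VERSION, START TIME, END TIME
		if strings.TrimSpace(cells[3]) == "newest-cni-579606" {
			fmt.Printf("%-8s %s\n", strings.TrimSpace(cells[1]), strings.TrimSpace(cells[2]))
		}
	}
}
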
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:54.845878  326490 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:54.846118  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846127  326490 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:54.846131  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846326  326490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:54.846865  326490 out.go:368] Setting JSON to false
	I1018 12:18:54.848113  326490 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3683,"bootTime":1760786252,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:54.848206  326490 start.go:141] virtualization: kvm guest
	I1018 12:18:54.851418  326490 out.go:179] * [newest-cni-579606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:54.856390  326490 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:54.856377  326490 notify.go:220] Checking for updates...
	I1018 12:18:54.857910  326490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:54.859215  326490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:54.860446  326490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:54.861847  326490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:54.863137  326490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:54.864900  326490 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.864984  326490 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.865092  326490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:54.888492  326490 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:54.888598  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:54.953711  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:54.941671438 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:54.953923  326490 docker.go:318] overlay module found
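
Editor's note: the two docker info dumps above come from `docker system info --format "{{json .}}"`, which minikube decodes to validate the host. A minimal sketch of that check in Go (standard library only; the struct covers just a few of the fields visible in the dump):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
}

func main() {
	// Same command the log records via cli_runner.go.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d MiB RAM\n",
		info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal/1024/1024)
}
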
	I1018 12:18:54.958794  326490 out.go:179] * Using the docker driver based on user configuration
	I1018 12:18:54.960013  326490 start.go:305] selected driver: docker
	I1018 12:18:54.960033  326490 start.go:925] validating driver "docker" against <nil>
	I1018 12:18:54.960046  326490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:54.960615  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:55.022513  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:55.011731081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:55.022798  326490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 12:18:55.022840  326490 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 12:18:55.023141  326490 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:18:55.025322  326490 out.go:179] * Using Docker driver with root privileges
	I1018 12:18:55.026401  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:18:55.026484  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:55.026498  326490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:18:55.026560  326490 start.go:349] cluster config:
	{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:55.027938  326490 out.go:179] * Starting "newest-cni-579606" primary control-plane node in "newest-cni-579606" cluster
	I1018 12:18:55.029100  326490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:55.030360  326490 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:55.031422  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.031468  326490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:55.031489  326490 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:55.031522  326490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:55.031591  326490 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:55.031603  326490 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:55.031705  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:18:55.031726  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json: {Name:mk20e362fc30401f09fc034ac5a55088adce3cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:55.053307  326490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:55.053326  326490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:55.053342  326490 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:55.053373  326490 start.go:360] acquireMachinesLock for newest-cni-579606: {Name:mk4161cf0bf2eb93a8110dc388332ec9ca8fc5ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:55.053467  326490 start.go:364] duration metric: took 78.123µs to acquireMachinesLock for "newest-cni-579606"
	I1018 12:18:55.053489  326490 start.go:93] Provisioning new machine with config: &{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:55.053550  326490 start.go:125] createHost starting for "" (driver="docker")
	W1018 12:18:51.958241  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:53.959108  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:55.846032  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:58.346225  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
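
Editor's note: the interleaved W1018 pod_ready.go lines above come from other test processes (pids 319485 and 317167) polling coredns pods until their Ready condition turns true. A minimal sketch of that readiness check with k8s.io/client-go (requires the client-go module; the kubeconfig path and pod name are taken from this report, everything else is illustrative):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name as it appears in the pod_ready.go warnings above.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-66bc5c9577-b6h9l", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Println("ready:", ready)
}
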
	I1018 12:18:55.055345  326490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:18:55.055547  326490 start.go:159] libmachine.API.Create for "newest-cni-579606" (driver="docker")
	I1018 12:18:55.055575  326490 client.go:168] LocalClient.Create starting
	I1018 12:18:55.055636  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 12:18:55.055669  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055683  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.055736  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 12:18:55.055773  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055796  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.056153  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:18:55.073803  326490 cli_runner.go:211] docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:18:55.073868  326490 network_create.go:284] running [docker network inspect newest-cni-579606] to gather additional debugging logs...
	I1018 12:18:55.073887  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606
	W1018 12:18:55.092574  326490 cli_runner.go:211] docker network inspect newest-cni-579606 returned with exit code 1
	I1018 12:18:55.092605  326490 network_create.go:287] error running [docker network inspect newest-cni-579606]: docker network inspect newest-cni-579606: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-579606 not found
	I1018 12:18:55.092623  326490 network_create.go:289] output of [docker network inspect newest-cni-579606]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-579606 not found
	
	** /stderr **
	I1018 12:18:55.092788  326490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:55.111259  326490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
	I1018 12:18:55.111908  326490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6069a4ec9777 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:f7:2a:6b:48:b9} reservation:<nil>}
	I1018 12:18:55.112751  326490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-670e794a7c9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:d0:78:df:c7:fd} reservation:<nil>}
	I1018 12:18:55.113423  326490 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb34d522296 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:fc:1a:65:23:03} reservation:<nil>}
	I1018 12:18:55.114281  326490 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc7b00}
	I1018 12:18:55.114303  326490 network_create.go:124] attempt to create docker network newest-cni-579606 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 12:18:55.114345  326490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-579606 newest-cni-579606
	I1018 12:18:55.175643  326490 network_create.go:108] docker network newest-cni-579606 192.168.85.0/24 created
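
Editor's note: the subnet walk logged above starts at 192.168.49.0/24 and steps by 9 in the third octet (.58, .67, .76) until a /24 without an existing bridge is found, here 192.168.85.0/24. A minimal sketch of that selection loop; the real code inspects live interfaces, while this version hard-codes the taken subnets observed in the log:

package main

import "fmt"

func main() {
	taken := map[string]bool{ // bridges reported by network.go above
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet that is taken:", cidr)
			continue
		}
		fmt.Println("using free private subnet:", cidr)
		break
	}
}
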
	I1018 12:18:55.175691  326490 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-579606" container
	I1018 12:18:55.175752  326490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:18:55.193582  326490 cli_runner.go:164] Run: docker volume create newest-cni-579606 --label name.minikube.sigs.k8s.io=newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:18:55.212499  326490 oci.go:103] Successfully created a docker volume newest-cni-579606
	I1018 12:18:55.212595  326490 cli_runner.go:164] Run: docker run --rm --name newest-cni-579606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --entrypoint /usr/bin/test -v newest-cni-579606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:18:55.635994  326490 oci.go:107] Successfully prepared a docker volume newest-cni-579606
	I1018 12:18:55.636038  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.636063  326490 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:18:55.636128  326490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 12:18:56.458229  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:58.958191  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:00.126774  326490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490575425s)
	I1018 12:19:00.126807  326490 kic.go:203] duration metric: took 4.4907405s to extract preloaded images to volume ...
	W1018 12:19:00.126891  326490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 12:19:00.126924  326490 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 12:19:00.126991  326490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:19:00.190480  326490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-579606 --name newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-579606 --network newest-cni-579606 --ip 192.168.85.2 --volume newest-cni-579606:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:19:00.476973  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Running}}
	I1018 12:19:00.495553  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.516545  326490 cli_runner.go:164] Run: docker exec newest-cni-579606 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:19:00.562561  326490 oci.go:144] the created container "newest-cni-579606" has a running status.
	I1018 12:19:00.562609  326490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa...
	I1018 12:19:00.820117  326490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:19:00.854117  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.877422  326490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:19:00.877449  326490 kic_runner.go:114] Args: [docker exec --privileged newest-cni-579606 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:19:00.925342  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.944520  326490 machine.go:93] provisionDockerMachine start ...
	I1018 12:19:00.944616  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:00.964493  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:00.964838  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:00.964858  326490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:19:01.103775  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.103807  326490 ubuntu.go:182] provisioning hostname "newest-cni-579606"
	I1018 12:19:01.103880  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.124094  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.124376  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.124392  326490 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-579606 && echo "newest-cni-579606" | sudo tee /etc/hostname
	I1018 12:19:01.270628  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.270703  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.289410  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.289674  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.289696  326490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-579606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-579606/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-579606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:19:01.423556  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
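
Editor's note: every provisioning command above runs over SSH to the container's forwarded port (127.0.0.1:33128 in this run) as user docker. A minimal sketch of that transport with golang.org/x/crypto/ssh (requires that module; the key path mirrors the one in the log but is illustrative):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical local copy of the machine key created by kic above.
	key, err := os.ReadFile("id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33128", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname") // same first command as the log
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
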
	I1018 12:19:01.423583  326490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:19:01.423603  326490 ubuntu.go:190] setting up certificates
	I1018 12:19:01.423619  326490 provision.go:84] configureAuth start
	I1018 12:19:01.423685  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:01.442627  326490 provision.go:143] copyHostCerts
	I1018 12:19:01.442683  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:19:01.442692  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:19:01.442779  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:19:01.442877  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:19:01.442887  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:19:01.442920  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:19:01.443028  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:19:01.443058  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:19:01.443088  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:19:01.443142  326490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.newest-cni-579606 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-579606]
	I1018 12:19:01.605969  326490 provision.go:177] copyRemoteCerts
	I1018 12:19:01.606038  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:19:01.606085  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.625297  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:01.723582  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:19:01.744640  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:19:01.763599  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:19:01.784423  326490 provision.go:87] duration metric: took 360.788993ms to configureAuth
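
Editor's note: the configureAuth step above generates a server certificate whose SANs cover every name the node can be reached by (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-579606). A compact standard-library sketch of issuing such a certificate; it self-signs for brevity, whereas minikube signs with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-579606"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-579606"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
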
	I1018 12:19:01.784458  326490 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:19:01.784652  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:01.784752  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.804299  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.804508  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.804524  326490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:19:02.051413  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:19:02.051436  326490 machine.go:96] duration metric: took 1.106891251s to provisionDockerMachine
	I1018 12:19:02.051444  326490 client.go:171] duration metric: took 6.995862509s to LocalClient.Create
	I1018 12:19:02.051460  326490 start.go:167] duration metric: took 6.995914544s to libmachine.API.Create "newest-cni-579606"
	I1018 12:19:02.051470  326490 start.go:293] postStartSetup for "newest-cni-579606" (driver="docker")
	I1018 12:19:02.051482  326490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:19:02.051542  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:19:02.051582  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.069826  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.169332  326490 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:19:02.173028  326490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:19:02.173060  326490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:19:02.173075  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:19:02.173131  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:19:02.173202  326490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:19:02.173312  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:19:02.181632  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:02.201730  326490 start.go:296] duration metric: took 150.246741ms for postStartSetup
	I1018 12:19:02.202117  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.220168  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:02.220438  326490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:19:02.220477  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.238665  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.333039  326490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:19:02.337804  326490 start.go:128] duration metric: took 7.284234042s to createHost
	I1018 12:19:02.337830  326490 start.go:83] releasing machines lock for "newest-cni-579606", held for 7.284352735s
	I1018 12:19:02.337891  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.357339  326490 ssh_runner.go:195] Run: cat /version.json
	I1018 12:19:02.357373  326490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:19:02.357386  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.357430  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.376606  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.377490  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.526194  326490 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:02.532929  326490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:19:02.568991  326490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:19:02.574362  326490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:19:02.574428  326490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:19:02.602949  326490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 12:19:02.602987  326490 start.go:495] detecting cgroup driver to use...
	I1018 12:19:02.603019  326490 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:19:02.603065  326490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:19:02.619432  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:19:02.632985  326490 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:19:02.633047  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:19:02.650953  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:19:02.670802  326490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:19:02.756116  326490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:19:02.848839  326490 docker.go:234] disabling docker service ...
	I1018 12:19:02.848900  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:19:02.868131  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:19:02.881575  326490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:19:02.965443  326490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:19:03.051508  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:19:03.064380  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:19:03.079484  326490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:19:03.079554  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.090169  326490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:19:03.090229  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.099749  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.109431  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.118802  326490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:19:03.127410  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.136357  326490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.151150  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.160956  326490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:19:03.169094  326490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:19:03.177522  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.257714  326490 ssh_runner.go:195] Run: sudo systemctl restart crio
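
Editor's note: the sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, setting the pause image and cgroup manager before crio is restarted. The same idempotent replace-the-whole-line rewrite in Go, run here against an inline sample config rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

// setKey replaces any existing `key = ...` line with the given value,
// mirroring the sed 's|^.*key = .*$|...|' pattern from the log.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
}

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
`
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "systemd")
	fmt.Print(conf)
}
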
	I1018 12:19:03.374283  326490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:19:03.374356  326490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:19:03.378571  326490 start.go:563] Will wait 60s for crictl version
	I1018 12:19:03.378624  326490 ssh_runner.go:195] Run: which crictl
	I1018 12:19:03.382638  326490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:19:03.406896  326490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:19:03.406996  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.436202  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.466606  326490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:19:03.468046  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:19:03.485613  326490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:19:03.489792  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
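
Editor's note: the bash one-liner above strips any stale host.minikube.internal entry (grep -v) and appends the current gateway IP, then copies the result back over /etc/hosts. The same replace-or-append logic in Go, run against a local test file (hosts.test is hypothetical) instead of /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, as grep -v does above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("hosts.test", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
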
	I1018 12:19:03.502123  326490 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1018 12:19:00.846128  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:19:03.345904  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:03.503451  326490 kubeadm.go:883] updating cluster {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:19:03.503568  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:03.503623  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.537963  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.537988  326490 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:19:03.538037  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.564020  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.564061  326490 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:19:03.564071  326490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 12:19:03.564172  326490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-579606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:19:03.564251  326490 ssh_runner.go:195] Run: crio config
	I1018 12:19:03.609404  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:03.609430  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:03.609446  326490 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 12:19:03.609473  326490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-579606 NodeName:newest-cni-579606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:19:03.609666  326490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-579606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:19:03.609744  326490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:19:03.618201  326490 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:19:03.618283  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:19:03.626679  326490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:19:03.639983  326490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:19:03.655953  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 12:19:03.668846  326490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:19:03.672666  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.683073  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.766600  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:03.797248  326490 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606 for IP: 192.168.85.2
	I1018 12:19:03.797269  326490 certs.go:195] generating shared ca certs ...
	I1018 12:19:03.797296  326490 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:03.797445  326490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:19:03.797500  326490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:19:03.797513  326490 certs.go:257] generating profile certs ...
	I1018 12:19:03.797585  326490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key
	I1018 12:19:03.797609  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt with IP's: []
	I1018 12:19:04.196975  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt ...
	I1018 12:19:04.197011  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt: {Name:mka42a654d079c2a23058a0f14154e8b79ca5459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197222  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key ...
	I1018 12:19:04.197241  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key: {Name:mk220b04a2afae0bcb10852575c558c1404f1005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197355  326490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad
	I1018 12:19:04.197378  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 12:19:04.310285  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad ...
	I1018 12:19:04.310312  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad: {Name:mke978bbcfe8f1a2cbf3531371f43b4028ef678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310509  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad ...
	I1018 12:19:04.310528  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad: {Name:mk42b24c0f6b076eda0e07dce8424a94f5271da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310658  326490 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt
	I1018 12:19:04.310784  326490 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key
	I1018 12:19:04.310873  326490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key
	I1018 12:19:04.310898  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt with IP's: []
	I1018 12:19:04.385339  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt ...
	I1018 12:19:04.385370  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt: {Name:mk66f445c5bca9cdd3c55e6ee197ee7cb14dae9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.385567  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key ...
	I1018 12:19:04.385584  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key: {Name:mk29fee630df834569bfa6e21a7cc861705c1451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.385849  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:19:04.385893  326490 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:19:04.385908  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:19:04.385940  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:19:04.385972  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:19:04.386016  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:19:04.386076  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:04.386584  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:19:04.405651  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:19:04.423574  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:19:04.441442  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:19:04.460483  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:19:04.478325  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:19:04.496004  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:19:04.514077  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:19:04.532154  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:19:04.552898  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:19:04.572871  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:19:04.593879  326490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:19:04.608514  326490 ssh_runner.go:195] Run: openssl version
	I1018 12:19:04.615149  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:19:04.624305  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628375  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628425  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.663623  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:19:04.673411  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:19:04.682605  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.686974  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.687061  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.724063  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:19:04.733543  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:19:04.742538  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746549  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746601  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.781517  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:19:04.791034  326490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:19:04.794955  326490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:19:04.795012  326490 kubeadm.go:400] StartCluster: {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:04.795092  326490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:04.795154  326490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:04.823284  326490 cri.go:89] found id: ""
	I1018 12:19:04.823356  326490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:19:04.832075  326490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:19:04.840408  326490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:19:04.840478  326490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	W1018 12:19:00.958896  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:03.459593  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:05.845166  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:07.344832  317167 pod_ready.go:94] pod "coredns-66bc5c9577-7qgqj" is "Ready"
	I1018 12:19:07.344882  317167 pod_ready.go:86] duration metric: took 37.505154401s for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.347549  317167 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.351825  317167 pod_ready.go:94] pod "etcd-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.351851  317167 pod_ready.go:86] duration metric: took 4.270969ms for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.353893  317167 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.357781  317167 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.357802  317167 pod_ready.go:86] duration metric: took 3.889439ms for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.359743  317167 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.543689  317167 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.543718  317167 pod_ready.go:86] duration metric: took 183.92899ms for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.742726  317167 pod_ready.go:83] waiting for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.142748  317167 pod_ready.go:94] pod "kube-proxy-bffkr" is "Ready"
	I1018 12:19:08.142797  317167 pod_ready.go:86] duration metric: took 400.045074ms for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.343168  317167 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743587  317167 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:08.743618  317167 pod_ready.go:86] duration metric: took 400.420854ms for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743633  317167 pod_ready.go:40] duration metric: took 38.908363338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:08.790224  317167 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:08.792295  317167 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-028309" cluster and "default" namespace by default
	I1018 12:19:04.849545  326490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:19:04.849562  326490 kubeadm.go:157] found existing configuration files:
	
	I1018 12:19:04.849600  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:19:04.857827  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:19:04.857889  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:19:04.865939  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:19:04.873915  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:19:04.873983  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:19:04.881861  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.890019  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:19:04.890088  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.898082  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:19:04.906181  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:19:04.906236  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:19:04.914044  326490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:19:04.975919  326490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:19:05.037824  326490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 12:19:05.957990  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:07.958857  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:09.958915  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:12.459097  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:14.458133  319485 pod_ready.go:94] pod "coredns-66bc5c9577-b6h9l" is "Ready"
	I1018 12:19:14.458159  319485 pod_ready.go:86] duration metric: took 31.505202758s for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.459959  319485 pod_ready.go:83] waiting for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.463248  319485 pod_ready.go:94] pod "etcd-embed-certs-175371" is "Ready"
	I1018 12:19:14.463270  319485 pod_ready.go:86] duration metric: took 3.284914ms for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.465089  319485 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.468551  319485 pod_ready.go:94] pod "kube-apiserver-embed-certs-175371" is "Ready"
	I1018 12:19:14.468570  319485 pod_ready.go:86] duration metric: took 3.458555ms for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.470303  319485 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.657339  319485 pod_ready.go:94] pod "kube-controller-manager-embed-certs-175371" is "Ready"
	I1018 12:19:14.657367  319485 pod_ready.go:86] duration metric: took 187.044696ms for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.856446  319485 pod_ready.go:83] waiting for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.257025  319485 pod_ready.go:94] pod "kube-proxy-t2x4c" is "Ready"
	I1018 12:19:15.257053  319485 pod_ready.go:86] duration metric: took 400.581639ms for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.456953  319485 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.893038  326490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:19:15.893090  326490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:19:15.893217  326490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:19:15.893353  326490 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 12:19:15.893498  326490 kubeadm.go:318] OS: Linux
	I1018 12:19:15.893566  326490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:19:15.893627  326490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:19:15.893696  326490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:19:15.893776  326490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:19:15.893850  326490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:19:15.893910  326490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:19:15.893969  326490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:19:15.894035  326490 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 12:19:15.894133  326490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:19:15.894281  326490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:19:15.894412  326490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:19:15.894516  326490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:19:15.896254  326490 out.go:252]   - Generating certificates and keys ...
	I1018 12:19:15.896337  326490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:19:15.896412  326490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:19:15.896489  326490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:19:15.896543  326490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:19:15.896599  326490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:19:15.896657  326490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:19:15.896708  326490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:19:15.896861  326490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.896916  326490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:19:15.897021  326490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.897080  326490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:19:15.897134  326490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:19:15.897176  326490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:19:15.897227  326490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:19:15.897280  326490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:19:15.897332  326490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:19:15.897378  326490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:19:15.897435  326490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:19:15.897486  326490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:19:15.897560  326490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:19:15.897622  326490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:19:15.899813  326490 out.go:252]   - Booting up control plane ...
	I1018 12:19:15.899904  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:19:15.899977  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:19:15.900053  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:19:15.900169  326490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:19:15.900307  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:19:15.900475  326490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:19:15.900586  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:19:15.900647  326490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:19:15.900835  326490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:19:15.900980  326490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:19:15.901059  326490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501237256s
	I1018 12:19:15.901160  326490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:19:15.901257  326490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 12:19:15.901388  326490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:19:15.901499  326490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:19:15.901562  326490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.520322183s
	I1018 12:19:15.901615  326490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.051874304s
	I1018 12:19:15.901668  326490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001667177s
	I1018 12:19:15.901817  326490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:19:15.902084  326490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:19:15.902160  326490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:19:15.902393  326490 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-579606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:19:15.902484  326490 kubeadm.go:318] [bootstrap-token] Using token: pmkr01.67na6m3iuf7b6wke
	I1018 12:19:15.904615  326490 out.go:252]   - Configuring RBAC rules ...
	I1018 12:19:15.904796  326490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:19:15.904875  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:19:15.905028  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:19:15.905156  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:19:15.905290  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:19:15.905391  326490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:19:15.905553  326490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:19:15.905613  326490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:19:15.905676  326490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:19:15.905684  326490 kubeadm.go:318] 
	I1018 12:19:15.905730  326490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:19:15.905736  326490 kubeadm.go:318] 
	I1018 12:19:15.905836  326490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:19:15.905852  326490 kubeadm.go:318] 
	I1018 12:19:15.905891  326490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:19:15.905967  326490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:19:15.906032  326490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:19:15.906040  326490 kubeadm.go:318] 
	I1018 12:19:15.906120  326490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:19:15.906130  326490 kubeadm.go:318] 
	I1018 12:19:15.906195  326490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:19:15.906216  326490 kubeadm.go:318] 
	I1018 12:19:15.906289  326490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:19:15.906393  326490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:19:15.906490  326490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:19:15.906500  326490 kubeadm.go:318] 
	I1018 12:19:15.906596  326490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:19:15.906826  326490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:19:15.906844  326490 kubeadm.go:318] 
	I1018 12:19:15.906936  326490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907119  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 12:19:15.907164  326490 kubeadm.go:318] 	--control-plane 
	I1018 12:19:15.907173  326490 kubeadm.go:318] 
	I1018 12:19:15.907323  326490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:19:15.907337  326490 kubeadm.go:318] 
	I1018 12:19:15.907436  326490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907606  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
	I1018 12:19:15.907623  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:15.907632  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:15.857063  319485 pod_ready.go:94] pod "kube-scheduler-embed-certs-175371" is "Ready"
	I1018 12:19:15.857091  319485 pod_ready.go:86] duration metric: took 400.110605ms for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.857103  319485 pod_ready.go:40] duration metric: took 32.907623738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:15.908233  319485 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:15.909420  326490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:19:15.910368  319485 out.go:179] * Done! kubectl is now configured to use "embed-certs-175371" cluster and "default" namespace by default
	I1018 12:19:15.911428  326490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:19:15.916203  326490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:19:15.916223  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:19:15.930716  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:19:16.186811  326490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:19:16.186877  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.186927  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-579606 minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=newest-cni-579606 minikube.k8s.io/primary=true
	I1018 12:19:16.200483  326490 ops.go:34] apiserver oom_adj: -16
	I1018 12:19:16.289962  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.790297  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.290815  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.790675  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.290971  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.791051  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.291007  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.790041  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.290948  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.364194  326490 kubeadm.go:1113] duration metric: took 4.177366872s to wait for elevateKubeSystemPrivileges
	I1018 12:19:20.364236  326490 kubeadm.go:402] duration metric: took 15.569226889s to StartCluster
	I1018 12:19:20.364257  326490 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.364341  326490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:20.366539  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.366808  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:19:20.366823  326490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:19:20.366886  326490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:19:20.366978  326490 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-579606"
	I1018 12:19:20.366998  326490 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-579606"
	I1018 12:19:20.367029  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.367046  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.367047  326490 addons.go:69] Setting default-storageclass=true in profile "newest-cni-579606"
	I1018 12:19:20.367088  326490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-579606"
	I1018 12:19:20.367465  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.367552  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.368575  326490 out.go:179] * Verifying Kubernetes components...
	I1018 12:19:20.370326  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:20.394477  326490 addons.go:238] Setting addon default-storageclass=true in "newest-cni-579606"
	I1018 12:19:20.394522  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.394869  326490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:19:20.395017  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.396676  326490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.396702  326490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:19:20.396772  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423305  326490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.423405  326490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:19:20.423499  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423817  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.453744  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.465106  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:19:20.532388  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:20.546306  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.568683  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.669063  326490 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 12:19:20.670556  326490 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:19:20.670609  326490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:19:20.899558  326490 api_server.go:72] duration metric: took 532.701277ms to wait for apiserver process to appear ...
	I1018 12:19:20.899596  326490 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:19:20.899623  326490 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:20.906703  326490 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:19:20.907612  326490 api_server.go:141] control plane version: v1.34.1
	I1018 12:19:20.907641  326490 api_server.go:131] duration metric: took 8.037799ms to wait for apiserver health ...
	I1018 12:19:20.907652  326490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:19:20.909941  326490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:19:20.911175  326490 addons.go:514] duration metric: took 544.288646ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:19:20.911194  326490 system_pods.go:59] 8 kube-system pods found
	I1018 12:19:20.911217  326490 system_pods.go:61] "coredns-66bc5c9577-p6bts" [49609244-6dc2-4950-8fad-8240b827ecca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911224  326490 system_pods.go:61] "etcd-newest-cni-579606" [496c00b4-7ad1-40c0-a440-c396a752cbf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:19:20.911231  326490 system_pods.go:61] "kindnet-2c4t6" [08c0018d-0f0f-435e-8868-31818d5639fa] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:19:20.911238  326490 system_pods.go:61] "kube-apiserver-newest-cni-579606" [a39961c7-019e-41ec-8843-e98e9c2e3604] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:19:20.911249  326490 system_pods.go:61] "kube-controller-manager-newest-cni-579606" [992bd82d-6489-43da-83ba-8dcb6b86fe48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:19:20.911262  326490 system_pods.go:61] "kube-proxy-5hjgn" [915df613-23ce-49e2-b125-d223024077b0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:19:20.911291  326490 system_pods.go:61] "kube-scheduler-newest-cni-579606" [2a1de39e-4fa6-49e8-a420-75a6c82ac73e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:19:20.911306  326490 system_pods.go:61] "storage-provisioner" [c7ff4c04-56e5-469b-9af2-dc1bf4fe969d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911314  326490 system_pods.go:74] duration metric: took 3.655766ms to wait for pod list to return data ...
	I1018 12:19:20.911324  326490 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:19:20.913681  326490 default_sa.go:45] found service account: "default"
	I1018 12:19:20.913702  326490 default_sa.go:55] duration metric: took 2.371901ms for default service account to be created ...
	I1018 12:19:20.913712  326490 kubeadm.go:586] duration metric: took 546.861004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:20.913730  326490 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:19:20.916084  326490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:19:20.916105  326490 node_conditions.go:123] node cpu capacity is 8
	I1018 12:19:20.916117  326490 node_conditions.go:105] duration metric: took 2.382506ms to run NodePressure ...
	I1018 12:19:20.916128  326490 start.go:241] waiting for startup goroutines ...
	I1018 12:19:21.173827  326490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-579606" context rescaled to 1 replicas
	I1018 12:19:21.173870  326490 start.go:246] waiting for cluster config update ...
	I1018 12:19:21.173882  326490 start.go:255] writing updated cluster config ...
	I1018 12:19:21.174193  326490 ssh_runner.go:195] Run: rm -f paused
	I1018 12:19:21.223166  326490 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:21.225317  326490 out.go:179] * Done! kubectl is now configured to use "newest-cni-579606" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.650725734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.654003968Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6894e9e3-716e-4301-8dc9-409004867ef0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.654754069Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a1aa57ee-fb02-47e7-9cf6-af2ac112cccb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.655478722Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.656104924Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.656109906Z" level=info msg="Ran pod sandbox 263a0eac3ae2e8a784926f608f8f6b30109d1d0ec94e738070843eb51facb604 with infra container: kube-system/kube-proxy-5hjgn/POD" id=6894e9e3-716e-4301-8dc9-409004867ef0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.656943829Z" level=info msg="Ran pod sandbox f1ae613abab10c8247cd55b6ade968e73b750c4901f5b2e6828ec2a44bb271e7 with infra container: kube-system/kindnet-2c4t6/POD" id=a1aa57ee-fb02-47e7-9cf6-af2ac112cccb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.657819454Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d5f200c2-fd73-445b-9371-a5cfc41c09a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.658329042Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=fcb1b752-a94b-44a0-baf0-3dc252c10a37 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.659646571Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=511202bc-a68e-4f78-b753-6bc5e4b47195 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.659910191Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=05a41b5b-d045-4b8f-bba7-fe11538d1588 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.66383065Z" level=info msg="Creating container: kube-system/kube-proxy-5hjgn/kube-proxy" id=37751cf2-19a9-4b2d-a8d3-e84af12cf0be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.664063936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.665492774Z" level=info msg="Creating container: kube-system/kindnet-2c4t6/kindnet-cni" id=a4498d7f-9a9c-4588-9eac-a0c398f158e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.666496205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.669378457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.669886023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.670718163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.67112486Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.699212423Z" level=info msg="Created container 8c5894667c64d75ee2080e4db3ea660e7dd6e46e8858f9094ce7cd9ae7be5882: kube-system/kindnet-2c4t6/kindnet-cni" id=a4498d7f-9a9c-4588-9eac-a0c398f158e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.700084701Z" level=info msg="Starting container: 8c5894667c64d75ee2080e4db3ea660e7dd6e46e8858f9094ce7cd9ae7be5882" id=af9dd69c-715c-420a-b248-acacc69771a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.700677386Z" level=info msg="Created container 029de6317def1b65546877e7d9dc64f0a2758cf4914bd9834755cd0293fbb100: kube-system/kube-proxy-5hjgn/kube-proxy" id=37751cf2-19a9-4b2d-a8d3-e84af12cf0be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.701268326Z" level=info msg="Starting container: 029de6317def1b65546877e7d9dc64f0a2758cf4914bd9834755cd0293fbb100" id=1aa22634-1133-4560-99ca-c7018f76b4b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.702377847Z" level=info msg="Started container" PID=1634 containerID=8c5894667c64d75ee2080e4db3ea660e7dd6e46e8858f9094ce7cd9ae7be5882 description=kube-system/kindnet-2c4t6/kindnet-cni id=af9dd69c-715c-420a-b248-acacc69771a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1ae613abab10c8247cd55b6ade968e73b750c4901f5b2e6828ec2a44bb271e7
	Oct 18 12:19:21 newest-cni-579606 crio[777]: time="2025-10-18T12:19:21.70488915Z" level=info msg="Started container" PID=1633 containerID=029de6317def1b65546877e7d9dc64f0a2758cf4914bd9834755cd0293fbb100 description=kube-system/kube-proxy-5hjgn/kube-proxy id=1aa22634-1133-4560-99ca-c7018f76b4b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=263a0eac3ae2e8a784926f608f8f6b30109d1d0ec94e738070843eb51facb604
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8c5894667c64d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   Less than a second ago   Running             kindnet-cni               0                   f1ae613abab10       kindnet-2c4t6                               kube-system
	029de6317def1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   Less than a second ago   Running             kube-proxy                0                   263a0eac3ae2e       kube-proxy-5hjgn                            kube-system
	5f52863e4a651       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago           Running             kube-controller-manager   0                   01c11b34e1327       kube-controller-manager-newest-cni-579606   kube-system
	a6c4abf6cd207       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago           Running             kube-scheduler            0                   a09da85e938bf       kube-scheduler-newest-cni-579606            kube-system
	088d8cbc259d7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago           Running             kube-apiserver            0                   4da8d178c8154       kube-apiserver-newest-cni-579606            kube-system
	d6a43006777b6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago           Running             etcd                      0                   67e65ec2110ef       etcd-newest-cni-579606                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-579606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-579606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=newest-cni-579606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:19:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-579606
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:15 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:15 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:15 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 12:19:15 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-579606
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                36059274-aa96-46ac-88d0-180e17b44739
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-579606                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-2c4t6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-579606             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-579606    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-5hjgn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-579606             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 0s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 13s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 13s)  kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 13s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-579606 event: Registered Node newest-cni-579606 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [d6a43006777b62fd6296e1db1714ae8c04b5b780312ab4205cf76a963d3d8503] <==
	{"level":"warn","ts":"2025-10-18T12:19:11.857933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.865725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.874720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.881518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.897624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.902111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.909548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.917919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.925113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.931957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.938167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.945160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.958602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.966064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.973602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.980311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.987121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:11.993750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:12.001928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:12.009190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:12.015516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:12.036307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:12.044918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:12.051319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:12.105339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:19:22 up  1:01,  0 user,  load average: 3.14, 3.86, 2.60
	Linux newest-cni-579606 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8c5894667c64d75ee2080e4db3ea660e7dd6e46e8858f9094ce7cd9ae7be5882] <==
	I1018 12:19:21.976847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:19:21.977192       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:19:21.977394       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:19:21.977417       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:19:21.977449       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:19:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:19:22.182592       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:19:22.182609       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:19:22.182620       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:19:22.182722       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [088d8cbc259d7f80d5987dafd22c0fb7b7a9919739793c946ca4985f04e71866] <==
	I1018 12:19:12.583718       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:19:12.585093       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:19:12.587142       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:19:12.587149       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:19:12.593062       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:19:12.593419       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:19:12.611930       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:19:12.779490       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:19:13.487850       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:19:13.491994       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:19:13.492016       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:19:13.990943       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:19:14.032199       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:19:14.091414       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:19:14.097931       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 12:19:14.099286       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:19:14.104350       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:19:14.523126       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:19:15.293989       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:19:15.305668       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:19:15.316474       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:19:19.525839       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 12:19:20.179403       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:19:20.184115       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:19:20.228562       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5f52863e4a651fcb80a5bc635c2d727eb1d47c66339cf08674fbd29fde578432] <==
	I1018 12:19:19.523063       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:19:19.523069       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:19:19.523111       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:19:19.523137       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:19:19.523211       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:19:19.523590       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:19:19.523607       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:19:19.523643       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:19:19.523673       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 12:19:19.523681       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:19:19.523741       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:19:19.523876       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:19:19.523972       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:19:19.523996       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:19:19.524054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:19:19.524151       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:19:19.524167       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 12:19:19.524693       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 12:19:19.524908       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:19:19.527329       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:19:19.529398       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:19:19.529840       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:19:19.533194       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:19:19.540778       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:19:19.540792       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-proxy [029de6317def1b65546877e7d9dc64f0a2758cf4914bd9834755cd0293fbb100] <==
	I1018 12:19:21.744147       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:19:21.799076       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:19:21.899616       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:19:21.899654       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 12:19:21.899747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:19:21.920426       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:19:21.920491       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:19:21.926157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:19:21.926521       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:19:21.926545       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:19:21.929901       1 config.go:200] "Starting service config controller"
	I1018 12:19:21.929926       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:19:21.929962       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:19:21.929969       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:19:21.930009       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:19:21.930015       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:19:21.930025       1 config.go:309] "Starting node config controller"
	I1018 12:19:21.930040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:19:22.030655       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:19:22.030692       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:19:22.030704       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:19:22.030773       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [a6c4abf6cd207d67482ad197d32d8c6b20e3ac4d0cdd1548af5c9823e0fb952f] <==
	E1018 12:19:12.528074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:19:12.528186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:19:12.528214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:19:12.528304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:19:12.528328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:19:12.528495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:19:12.528510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:19:12.528538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:19:12.528673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:19:13.365645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:19:13.415059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:19:13.418174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:19:13.464737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:19:13.470016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:19:13.477351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:19:13.518453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:19:13.529229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:19:13.586380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:19:13.622872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:19:13.700396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:19:13.738840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:19:13.748886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:19:13.777954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:19:13.784084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1018 12:19:15.425137       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:19:16 newest-cni-579606 kubelet[1316]: I1018 12:19:16.237051    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-579606" podStartSLOduration=1.237026795 podStartE2EDuration="1.237026795s" podCreationTimestamp="2025-10-18 12:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:19:16.224073991 +0000 UTC m=+1.186988994" watchObservedRunningTime="2025-10-18 12:19:16.237026795 +0000 UTC m=+1.199941798"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.551438    1316 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.552101    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645068    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-cni-cfg\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645176    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fbr\" (UniqueName: \"kubernetes.io/projected/08c0018d-0f0f-435e-8868-31818d5639fa-kube-api-access-w5fbr\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645236    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwmkz\" (UniqueName: \"kubernetes.io/projected/915df613-23ce-49e2-b125-d223024077b0-kube-api-access-lwmkz\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645268    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-lib-modules\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645299    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/915df613-23ce-49e2-b125-d223024077b0-kube-proxy\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645328    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-xtables-lock\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645351    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/915df613-23ce-49e2-b125-d223024077b0-xtables-lock\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: I1018 12:19:19.645388    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/915df613-23ce-49e2-b125-d223024077b0-lib-modules\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: E1018 12:19:19.751817    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: E1018 12:19:19.751840    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: E1018 12:19:19.751851    1316 projected.go:196] Error preparing data for projected volume kube-api-access-lwmkz for pod kube-system/kube-proxy-5hjgn: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: E1018 12:19:19.751861    1316 projected.go:196] Error preparing data for projected volume kube-api-access-w5fbr for pod kube-system/kindnet-2c4t6: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: E1018 12:19:19.751927    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/915df613-23ce-49e2-b125-d223024077b0-kube-api-access-lwmkz podName:915df613-23ce-49e2-b125-d223024077b0 nodeName:}" failed. No retries permitted until 2025-10-18 12:19:20.251902927 +0000 UTC m=+5.214817935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lwmkz" (UniqueName: "kubernetes.io/projected/915df613-23ce-49e2-b125-d223024077b0-kube-api-access-lwmkz") pod "kube-proxy-5hjgn" (UID: "915df613-23ce-49e2-b125-d223024077b0") : configmap "kube-root-ca.crt" not found
	Oct 18 12:19:19 newest-cni-579606 kubelet[1316]: E1018 12:19:19.751943    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08c0018d-0f0f-435e-8868-31818d5639fa-kube-api-access-w5fbr podName:08c0018d-0f0f-435e-8868-31818d5639fa nodeName:}" failed. No retries permitted until 2025-10-18 12:19:20.251936359 +0000 UTC m=+5.214851354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w5fbr" (UniqueName: "kubernetes.io/projected/08c0018d-0f0f-435e-8868-31818d5639fa-kube-api-access-w5fbr") pod "kindnet-2c4t6" (UID: "08c0018d-0f0f-435e-8868-31818d5639fa") : configmap "kube-root-ca.crt" not found
	Oct 18 12:19:20 newest-cni-579606 kubelet[1316]: E1018 12:19:20.351630    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:20 newest-cni-579606 kubelet[1316]: E1018 12:19:20.351669    1316 projected.go:196] Error preparing data for projected volume kube-api-access-lwmkz for pod kube-system/kube-proxy-5hjgn: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:20 newest-cni-579606 kubelet[1316]: E1018 12:19:20.351683    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:20 newest-cni-579606 kubelet[1316]: E1018 12:19:20.351713    1316 projected.go:196] Error preparing data for projected volume kube-api-access-w5fbr for pod kube-system/kindnet-2c4t6: configmap "kube-root-ca.crt" not found
	Oct 18 12:19:20 newest-cni-579606 kubelet[1316]: E1018 12:19:20.351768    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/915df613-23ce-49e2-b125-d223024077b0-kube-api-access-lwmkz podName:915df613-23ce-49e2-b125-d223024077b0 nodeName:}" failed. No retries permitted until 2025-10-18 12:19:21.351729802 +0000 UTC m=+6.314644803 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lwmkz" (UniqueName: "kubernetes.io/projected/915df613-23ce-49e2-b125-d223024077b0-kube-api-access-lwmkz") pod "kube-proxy-5hjgn" (UID: "915df613-23ce-49e2-b125-d223024077b0") : configmap "kube-root-ca.crt" not found
	Oct 18 12:19:20 newest-cni-579606 kubelet[1316]: E1018 12:19:20.351790    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08c0018d-0f0f-435e-8868-31818d5639fa-kube-api-access-w5fbr podName:08c0018d-0f0f-435e-8868-31818d5639fa nodeName:}" failed. No retries permitted until 2025-10-18 12:19:21.351780411 +0000 UTC m=+6.314695402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w5fbr" (UniqueName: "kubernetes.io/projected/08c0018d-0f0f-435e-8868-31818d5639fa-kube-api-access-w5fbr") pod "kindnet-2c4t6" (UID: "08c0018d-0f0f-435e-8868-31818d5639fa") : configmap "kube-root-ca.crt" not found
	Oct 18 12:19:22 newest-cni-579606 kubelet[1316]: I1018 12:19:22.172152    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2c4t6" podStartSLOduration=3.172062478 podStartE2EDuration="3.172062478s" podCreationTimestamp="2025-10-18 12:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:19:22.171671926 +0000 UTC m=+7.134586929" watchObservedRunningTime="2025-10-18 12:19:22.172062478 +0000 UTC m=+7.134977484"
	Oct 18 12:19:22 newest-cni-579606 kubelet[1316]: I1018 12:19:22.185902    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5hjgn" podStartSLOduration=3.185883089 podStartE2EDuration="3.185883089s" podCreationTimestamp="2025-10-18 12:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:19:22.185689662 +0000 UTC m=+7.148604665" watchObservedRunningTime="2025-10-18 12:19:22.185883089 +0000 UTC m=+7.148798094"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-579606 -n newest-cni-579606
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-579606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-p6bts storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner: exit status 1 (64.066319ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-p6bts" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-175371 --alsologtostderr -v=1
E1018 12:19:27.770060    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-175371 --alsologtostderr -v=1: exit status 80 (1.719759176s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-175371 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:27.661141  332620 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:27.661446  332620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:27.661459  332620 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:27.661465  332620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:27.661839  332620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:19:27.662195  332620 out.go:368] Setting JSON to false
	I1018 12:19:27.662244  332620 mustload.go:65] Loading cluster: embed-certs-175371
	I1018 12:19:27.662722  332620 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:27.663317  332620 cli_runner.go:164] Run: docker container inspect embed-certs-175371 --format={{.State.Status}}
	I1018 12:19:27.682575  332620 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:19:27.682861  332620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:19:27.749274  332620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:81 SystemTime:2025-10-18 12:19:27.737816461 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:19:27.749923  332620 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-175371 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 12:19:27.752133  332620 out.go:179] * Pausing node embed-certs-175371 ... 
	I1018 12:19:27.753458  332620 host.go:66] Checking if "embed-certs-175371" exists ...
	I1018 12:19:27.753713  332620 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:27.753746  332620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175371
	I1018 12:19:27.774884  332620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/embed-certs-175371/id_rsa Username:docker}
	I1018 12:19:27.870794  332620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:27.883422  332620 pause.go:52] kubelet running: true
	I1018 12:19:27.883499  332620 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:28.052937  332620 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:28.053089  332620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:28.126831  332620 cri.go:89] found id: "5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59"
	I1018 12:19:28.126857  332620 cri.go:89] found id: "f6306f9162a1d28042bad4e6da438c5462874638b4d0624b07e6465f0c518b7e"
	I1018 12:19:28.126873  332620 cri.go:89] found id: "4fc9ce5175d3764f8e0fb91e099e901a2302dfd2ff50d4abfb0a9edeb71386f9"
	I1018 12:19:28.126879  332620 cri.go:89] found id: "36a5bde68e89db4b5596d0782075e0d814c39bdb4c4812f2188ab8957137475e"
	I1018 12:19:28.126883  332620 cri.go:89] found id: "ef18b0bcad14e848b1c27658083f65d022651b906dddfc0ef264638b57310d83"
	I1018 12:19:28.126888  332620 cri.go:89] found id: "7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd"
	I1018 12:19:28.126892  332620 cri.go:89] found id: "8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e"
	I1018 12:19:28.126896  332620 cri.go:89] found id: "d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40"
	I1018 12:19:28.126901  332620 cri.go:89] found id: "a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f"
	I1018 12:19:28.126909  332620 cri.go:89] found id: "a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	I1018 12:19:28.126917  332620 cri.go:89] found id: "cb1a3164b004db279fa65be1382cd2de2087a29d8a9572c7d9390b8435ece780"
	I1018 12:19:28.126921  332620 cri.go:89] found id: ""
	I1018 12:19:28.126965  332620 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:28.140083  332620 retry.go:31] will retry after 331.309818ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:28Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:28.471673  332620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:28.485062  332620 pause.go:52] kubelet running: false
	I1018 12:19:28.485123  332620 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:28.630695  332620 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:28.630845  332620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:28.703865  332620 cri.go:89] found id: "5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59"
	I1018 12:19:28.703894  332620 cri.go:89] found id: "f6306f9162a1d28042bad4e6da438c5462874638b4d0624b07e6465f0c518b7e"
	I1018 12:19:28.703899  332620 cri.go:89] found id: "4fc9ce5175d3764f8e0fb91e099e901a2302dfd2ff50d4abfb0a9edeb71386f9"
	I1018 12:19:28.703903  332620 cri.go:89] found id: "36a5bde68e89db4b5596d0782075e0d814c39bdb4c4812f2188ab8957137475e"
	I1018 12:19:28.703907  332620 cri.go:89] found id: "ef18b0bcad14e848b1c27658083f65d022651b906dddfc0ef264638b57310d83"
	I1018 12:19:28.703912  332620 cri.go:89] found id: "7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd"
	I1018 12:19:28.703916  332620 cri.go:89] found id: "8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e"
	I1018 12:19:28.703920  332620 cri.go:89] found id: "d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40"
	I1018 12:19:28.703923  332620 cri.go:89] found id: "a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f"
	I1018 12:19:28.703940  332620 cri.go:89] found id: "a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	I1018 12:19:28.703948  332620 cri.go:89] found id: "cb1a3164b004db279fa65be1382cd2de2087a29d8a9572c7d9390b8435ece780"
	I1018 12:19:28.703953  332620 cri.go:89] found id: ""
	I1018 12:19:28.703999  332620 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:28.716569  332620 retry.go:31] will retry after 377.159356ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:28Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:29.093881  332620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:29.107119  332620 pause.go:52] kubelet running: false
	I1018 12:19:29.107173  332620 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:29.242786  332620 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:29.242860  332620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:29.306305  332620 cri.go:89] found id: "5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59"
	I1018 12:19:29.306332  332620 cri.go:89] found id: "f6306f9162a1d28042bad4e6da438c5462874638b4d0624b07e6465f0c518b7e"
	I1018 12:19:29.306337  332620 cri.go:89] found id: "4fc9ce5175d3764f8e0fb91e099e901a2302dfd2ff50d4abfb0a9edeb71386f9"
	I1018 12:19:29.306340  332620 cri.go:89] found id: "36a5bde68e89db4b5596d0782075e0d814c39bdb4c4812f2188ab8957137475e"
	I1018 12:19:29.306343  332620 cri.go:89] found id: "ef18b0bcad14e848b1c27658083f65d022651b906dddfc0ef264638b57310d83"
	I1018 12:19:29.306346  332620 cri.go:89] found id: "7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd"
	I1018 12:19:29.306348  332620 cri.go:89] found id: "8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e"
	I1018 12:19:29.306351  332620 cri.go:89] found id: "d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40"
	I1018 12:19:29.306353  332620 cri.go:89] found id: "a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f"
	I1018 12:19:29.306358  332620 cri.go:89] found id: "a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	I1018 12:19:29.306360  332620 cri.go:89] found id: "cb1a3164b004db279fa65be1382cd2de2087a29d8a9572c7d9390b8435ece780"
	I1018 12:19:29.306363  332620 cri.go:89] found id: ""
	I1018 12:19:29.306398  332620 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:29.320422  332620 out.go:203] 
	W1018 12:19:29.321845  332620 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:29.321866  332620 out.go:285] * 
	W1018 12:19:29.325891  332620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:29.327206  332620 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-175371 --alsologtostderr -v=1 failed: exit status 80
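The failure mode above is worth restating: pause disables the kubelet, lists the crio containers through crictl, then shells out to "sudo runc list -f json", which fails because /run/runc is absent on this crio node; minikube retries with a jittered back-off (the retry.go:31 lines) and finally exits with GUEST_PAUSE (exit status 80). Below is a minimal Go sketch of that retry-then-fail pattern, not minikube's actual code: a local os/exec call is a hypothetical stand-in for minikube's ssh_runner.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listContainers runs the same command the pause path runs above; on a node
// whose runc state dir (/run/runc by default) is missing, it fails exactly
// as in the log ("open /run/runc: no such file or directory").
func listContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listContainers()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		// Jittered back-off in the ~300-400ms range seen at retry.go:31.
		delay := 300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, delay)
		time.Sleep(delay)
	}
	// The real code gives up here and exits with GUEST_PAUSE (exit status 80).
	fmt.Println("X Exiting due to GUEST_PAUSE")
}

On a healthy runc-backed node the first attempt returns a JSON array and the loop exits immediately; here every attempt hits the same missing-directory error.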
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-175371
helpers_test.go:243: (dbg) docker inspect embed-certs-175371:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1",
	        "Created": "2025-10-18T12:16:56.477755693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319691,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:18:30.947531585Z",
	            "FinishedAt": "2025-10-18T12:18:30.09328773Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/hostname",
	        "HostsPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/hosts",
	        "LogPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1-json.log",
	        "Name": "/embed-certs-175371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-175371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-175371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1",
	                "LowerDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-175371",
	                "Source": "/var/lib/docker/volumes/embed-certs-175371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-175371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-175371",
	                "name.minikube.sigs.k8s.io": "embed-certs-175371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6fec3315c8af5bfe98464f6647c3daf969a719ab3bf25b319e08603b9bcd0f83",
	            "SandboxKey": "/var/run/docker/netns/6fec3315c8af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-175371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:73:2c:89:ea:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb34d5222966a405cf9b383e8910070a73637f333cd8b420bf2f4d8d0d6f8e0",
	                    "EndpointID": "ba6c3969779f896f7a117457772b255d8ebe76fe55fe84572750db4a4d43d4da",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-175371",
	                        "62e5625dfcf2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
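The inspect output above also confirms the SSH mapping the pause path relied on: 22/tcp is bound to 127.0.0.1:33123, the same port sshutil connected to at 12:19:27.774884. A sketch of that lookup, assuming only that the docker CLI is on PATH (minikube does the equivalent through cli_runner, with the same Go template visible in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortForSSH mirrors the template-based lookup in the log: index
// NetworkSettings.Ports for "22/tcp" and take the first binding's HostPort.
func hostPortForSSH(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortForSSH("embed-certs-175371")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // "33123" per the inspect output above
}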
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371: exit status 2 (304.36633ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-175371 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-175371 logs -n 25: (1.08347493s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ default-k8s-diff-port-028309 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p default-k8s-diff-port-028309 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ stop    │ -p newest-cni-579606 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ embed-certs-175371 image list --format=json                                                                                                                                                                                                   │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p embed-certs-175371 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:54.845878  326490 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:54.846118  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846127  326490 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:54.846131  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846326  326490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:54.846865  326490 out.go:368] Setting JSON to false
	I1018 12:18:54.848113  326490 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3683,"bootTime":1760786252,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:54.848206  326490 start.go:141] virtualization: kvm guest
	I1018 12:18:54.851418  326490 out.go:179] * [newest-cni-579606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:54.856390  326490 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:54.856377  326490 notify.go:220] Checking for updates...
	I1018 12:18:54.857910  326490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:54.859215  326490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:54.860446  326490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:54.861847  326490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:54.863137  326490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:54.864900  326490 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.864984  326490 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.865092  326490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:54.888492  326490 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:54.888598  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:54.953711  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:54.941671438 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:54.953923  326490 docker.go:318] overlay module found
	I1018 12:18:54.958794  326490 out.go:179] * Using the docker driver based on user configuration
	I1018 12:18:54.960013  326490 start.go:305] selected driver: docker
	I1018 12:18:54.960033  326490 start.go:925] validating driver "docker" against <nil>
	I1018 12:18:54.960046  326490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:54.960615  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:55.022513  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:55.011731081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:55.022798  326490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 12:18:55.022840  326490 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 12:18:55.023141  326490 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:18:55.025322  326490 out.go:179] * Using Docker driver with root privileges
	I1018 12:18:55.026401  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:18:55.026484  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:55.026498  326490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:18:55.026560  326490 start.go:349] cluster config:
	{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:55.027938  326490 out.go:179] * Starting "newest-cni-579606" primary control-plane node in "newest-cni-579606" cluster
	I1018 12:18:55.029100  326490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:55.030360  326490 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:55.031422  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.031468  326490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:55.031489  326490 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:55.031522  326490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:55.031591  326490 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:55.031603  326490 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:55.031705  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:18:55.031726  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json: {Name:mk20e362fc30401f09fc034ac5a55088adce3cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:55.053307  326490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:55.053326  326490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:55.053342  326490 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:55.053373  326490 start.go:360] acquireMachinesLock for newest-cni-579606: {Name:mk4161cf0bf2eb93a8110dc388332ec9ca8fc5ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:55.053467  326490 start.go:364] duration metric: took 78.123µs to acquireMachinesLock for "newest-cni-579606"
	I1018 12:18:55.053489  326490 start.go:93] Provisioning new machine with config: &{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:55.053550  326490 start.go:125] createHost starting for "" (driver="docker")
	W1018 12:18:51.958241  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:53.959108  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:55.846032  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:58.346225  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:55.055345  326490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:18:55.055547  326490 start.go:159] libmachine.API.Create for "newest-cni-579606" (driver="docker")
	I1018 12:18:55.055575  326490 client.go:168] LocalClient.Create starting
	I1018 12:18:55.055636  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 12:18:55.055669  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055683  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.055736  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 12:18:55.055773  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055796  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.056153  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:18:55.073803  326490 cli_runner.go:211] docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:18:55.073868  326490 network_create.go:284] running [docker network inspect newest-cni-579606] to gather additional debugging logs...
	I1018 12:18:55.073887  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606
	W1018 12:18:55.092574  326490 cli_runner.go:211] docker network inspect newest-cni-579606 returned with exit code 1
	I1018 12:18:55.092605  326490 network_create.go:287] error running [docker network inspect newest-cni-579606]: docker network inspect newest-cni-579606: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-579606 not found
	I1018 12:18:55.092623  326490 network_create.go:289] output of [docker network inspect newest-cni-579606]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-579606 not found
	
	** /stderr **
	I1018 12:18:55.092788  326490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:55.111259  326490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
	I1018 12:18:55.111908  326490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6069a4ec9777 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:f7:2a:6b:48:b9} reservation:<nil>}
	I1018 12:18:55.112751  326490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-670e794a7c9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:d0:78:df:c7:fd} reservation:<nil>}
	I1018 12:18:55.113423  326490 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb34d522296 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:fc:1a:65:23:03} reservation:<nil>}
	I1018 12:18:55.114281  326490 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc7b00}
	I1018 12:18:55.114303  326490 network_create.go:124] attempt to create docker network newest-cni-579606 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 12:18:55.114345  326490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-579606 newest-cni-579606
	I1018 12:18:55.175643  326490 network_create.go:108] docker network newest-cni-579606 192.168.85.0/24 created
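	The network.go lines above show how the subnet was chosen: candidate private /24 ranges are probed starting at 192.168.49.0 and stepping the third octet by 9 (49, 58, 67, 76) until a free one is found, here 192.168.85.0/24. A sketch of that probe loop, with a hypothetical isTaken predicate standing in for the real checks against host interfaces and existing Docker bridge networks:
	
	package main
	
	import "fmt"
	
	// isTaken is an illustrative stand-in for the real interface and
	// Docker-network checks (the br-... interfaces in the log).
	func isTaken(subnet string) bool {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true, // embed-certs-175371's network
		}
		return taken[subnet]
	}
	
	func main() {
		// Advance the third octet by 9, matching 49 -> 58 -> 67 -> 76 -> 85.
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if isTaken(subnet) {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet) // 192.168.85.0/24 here
			return
		}
	}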
	I1018 12:18:55.175691  326490 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-579606" container
	I1018 12:18:55.175752  326490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:18:55.193582  326490 cli_runner.go:164] Run: docker volume create newest-cni-579606 --label name.minikube.sigs.k8s.io=newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:18:55.212499  326490 oci.go:103] Successfully created a docker volume newest-cni-579606
	I1018 12:18:55.212595  326490 cli_runner.go:164] Run: docker run --rm --name newest-cni-579606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --entrypoint /usr/bin/test -v newest-cni-579606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:18:55.635994  326490 oci.go:107] Successfully prepared a docker volume newest-cni-579606
	I1018 12:18:55.636038  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.636063  326490 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:18:55.636128  326490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 12:18:56.458229  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:58.958191  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:00.126774  326490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490575425s)
	I1018 12:19:00.126807  326490 kic.go:203] duration metric: took 4.4907405s to extract preloaded images to volume ...
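
The extraction step works by bind-mounting the lz4 preload tarball read-only into a throwaway kicbase container and untarring it straight into the newest-cni-579606 volume, so the node container later starts with /var already populated. A Go sketch wrapping the same `docker run` shown in the log (the tarball path is abbreviated; image tag and arguments are copied from the command above):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The preload is mounted read-only at /preloaded.tar and unpacked into
	// the named volume mounted at /extractDir, mirroring the log's Run line.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "newest-cni-579606:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
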
	W1018 12:19:00.126891  326490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 12:19:00.126924  326490 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 12:19:00.126991  326490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:19:00.190480  326490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-579606 --name newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-579606 --network newest-cni-579606 --ip 192.168.85.2 --volume newest-cni-579606:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:19:00.476973  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Running}}
	I1018 12:19:00.495553  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.516545  326490 cli_runner.go:164] Run: docker exec newest-cni-579606 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:19:00.562561  326490 oci.go:144] the created container "newest-cni-579606" has a running status.
	I1018 12:19:00.562609  326490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa...
	I1018 12:19:00.820117  326490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:19:00.854117  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.877422  326490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:19:00.877449  326490 kic_runner.go:114] Args: [docker exec --privileged newest-cni-579606 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:19:00.925342  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.944520  326490 machine.go:93] provisionDockerMachine start ...
	I1018 12:19:00.944616  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:00.964493  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:00.964838  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:00.964858  326490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:19:01.103775  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.103807  326490 ubuntu.go:182] provisioning hostname "newest-cni-579606"
	I1018 12:19:01.103880  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.124094  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.124376  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.124392  326490 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-579606 && echo "newest-cni-579606" | sudo tee /etc/hostname
	I1018 12:19:01.270628  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.270703  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.289410  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.289674  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.289696  326490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-579606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-579606/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-579606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:19:01.423556  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
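
All of this provisioning is dialed over loopback SSH: the node container's ports were published with --publish=127.0.0.1::22 and friends, so Docker picks an ephemeral host port, and minikube recovers it with the inspect template visible in the Run lines above (33128 in this run). A small Go helper doing the same lookup, assuming the docker CLI is on PATH (the function name is invented for this sketch):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostPort reports which ephemeral loopback port Docker bound for a
// container port, using the same inspect template as the log (the extra
// quoting in the log lines is just shell escaping).
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p, err := hostPort("newest-cni-579606", "22/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh is published on 127.0.0.1:" + p) // 33128 in this run
}
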
	I1018 12:19:01.423583  326490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:19:01.423603  326490 ubuntu.go:190] setting up certificates
	I1018 12:19:01.423619  326490 provision.go:84] configureAuth start
	I1018 12:19:01.423685  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:01.442627  326490 provision.go:143] copyHostCerts
	I1018 12:19:01.442683  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:19:01.442692  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:19:01.442779  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:19:01.442877  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:19:01.442887  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:19:01.442920  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:19:01.443028  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:19:01.443058  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:19:01.443088  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:19:01.443142  326490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.newest-cni-579606 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-579606]
	I1018 12:19:01.605969  326490 provision.go:177] copyRemoteCerts
	I1018 12:19:01.606038  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:19:01.606085  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.625297  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:01.723582  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:19:01.744640  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:19:01.763599  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:19:01.784423  326490 provision.go:87] duration metric: took 360.788993ms to configureAuth
	I1018 12:19:01.784458  326490 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:19:01.784652  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:01.784752  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.804299  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.804508  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.804524  326490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:19:02.051413  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:19:02.051436  326490 machine.go:96] duration metric: took 1.106891251s to provisionDockerMachine
	I1018 12:19:02.051444  326490 client.go:171] duration metric: took 6.995862509s to LocalClient.Create
	I1018 12:19:02.051460  326490 start.go:167] duration metric: took 6.995914544s to libmachine.API.Create "newest-cni-579606"
	I1018 12:19:02.051470  326490 start.go:293] postStartSetup for "newest-cni-579606" (driver="docker")
	I1018 12:19:02.051482  326490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:19:02.051542  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:19:02.051582  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.069826  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.169332  326490 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:19:02.173028  326490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:19:02.173060  326490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:19:02.173075  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:19:02.173131  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:19:02.173202  326490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:19:02.173312  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:19:02.181632  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:02.201730  326490 start.go:296] duration metric: took 150.246741ms for postStartSetup
	I1018 12:19:02.202117  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.220168  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:02.220438  326490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:19:02.220477  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.238665  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.333039  326490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:19:02.337804  326490 start.go:128] duration metric: took 7.284234042s to createHost
	I1018 12:19:02.337830  326490 start.go:83] releasing machines lock for "newest-cni-579606", held for 7.284352735s
	I1018 12:19:02.337891  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.357339  326490 ssh_runner.go:195] Run: cat /version.json
	I1018 12:19:02.357373  326490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:19:02.357386  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.357430  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.376606  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.377490  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.526194  326490 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:02.532929  326490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:19:02.568991  326490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:19:02.574362  326490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:19:02.574428  326490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:19:02.602949  326490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
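
The stock bridge and podman CNI configs are sidelined by renaming them with a .mk_disabled suffix, which keeps them recoverable while ensuring the CNI minikube installs next (kindnet, per the "recommending kindnet" line later in the log) is the only active one. A Go equivalent of the find/mv one-liner above, assuming it runs as root on the node:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename any bridge/podman config in /etc/cni/net.d to <name>.mk_disabled,
	// mirroring the `find ... -exec mv {} {}.mk_disabled` in the log.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Println("disabled", src)
		}
	}
}
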
	I1018 12:19:02.602987  326490 start.go:495] detecting cgroup driver to use...
	I1018 12:19:02.603019  326490 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:19:02.603065  326490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:19:02.619432  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:19:02.632985  326490 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:19:02.633047  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:19:02.650953  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:19:02.670802  326490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:19:02.756116  326490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:19:02.848839  326490 docker.go:234] disabling docker service ...
	I1018 12:19:02.848900  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:19:02.868131  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:19:02.881575  326490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:19:02.965443  326490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:19:03.051508  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:19:03.064380  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:19:03.079484  326490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:19:03.079554  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.090169  326490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:19:03.090229  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.099749  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.109431  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.118802  326490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:19:03.127410  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.136357  326490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.151150  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.160956  326490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:19:03.169094  326490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:19:03.177522  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.257714  326490 ssh_runner.go:195] Run: sudo systemctl restart crio
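
The sed runs above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, the pod conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 default sysctl, followed by a daemon-reload and crio restart. A Go sketch of the two central rewrites using regexp instead of sed (path and values copied from the log; must run as root):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as the log's sed edits: force the pause image and the
	// systemd cgroup manager, whatever the lines currently say.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload && systemctl restart crio` must follow,
	// exactly as the log does.
}
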
	I1018 12:19:03.374283  326490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:19:03.374356  326490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:19:03.378571  326490 start.go:563] Will wait 60s for crictl version
	I1018 12:19:03.378624  326490 ssh_runner.go:195] Run: which crictl
	I1018 12:19:03.382638  326490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:19:03.406896  326490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:19:03.406996  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.436202  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.466606  326490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:19:03.468046  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:19:03.485613  326490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:19:03.489792  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.502123  326490 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1018 12:19:00.846128  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:19:03.345904  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:03.503451  326490 kubeadm.go:883] updating cluster {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:19:03.503568  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:03.503623  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.537963  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.537988  326490 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:19:03.538037  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.564020  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.564061  326490 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:19:03.564071  326490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 12:19:03.564172  326490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-579606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:19:03.564251  326490 ssh_runner.go:195] Run: crio config
	I1018 12:19:03.609404  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:03.609430  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:03.609446  326490 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 12:19:03.609473  326490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-579606 NodeName:newest-cni-579606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:19:03.609666  326490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-579606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:19:03.609744  326490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:19:03.618201  326490 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:19:03.618283  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:19:03.626679  326490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:19:03.639983  326490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:19:03.655953  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
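
The rendered kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, printed in full above) is staged as /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml further down. As an aside, a config like this can be sanity-checked without touching the host via kubeadm's standard --dry-run flag; a sketch, assuming kubeadm is on PATH:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// --dry-run exercises the config and prints what kubeadm would do,
	// leaving /etc/kubernetes and the manifests directory untouched.
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
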
	I1018 12:19:03.668846  326490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:19:03.672666  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.683073  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.766600  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:03.797248  326490 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606 for IP: 192.168.85.2
	I1018 12:19:03.797269  326490 certs.go:195] generating shared ca certs ...
	I1018 12:19:03.797296  326490 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:03.797445  326490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:19:03.797500  326490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:19:03.797513  326490 certs.go:257] generating profile certs ...
	I1018 12:19:03.797585  326490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key
	I1018 12:19:03.797609  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt with IP's: []
	I1018 12:19:04.196975  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt ...
	I1018 12:19:04.197011  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt: {Name:mka42a654d079c2a23058a0f14154e8b79ca5459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197222  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key ...
	I1018 12:19:04.197241  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key: {Name:mk220b04a2afae0bcb10852575c558c1404f1005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197355  326490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad
	I1018 12:19:04.197378  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 12:19:04.310285  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad ...
	I1018 12:19:04.310312  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad: {Name:mke978bbcfe8f1a2cbf3531371f43b4028ef678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310509  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad ...
	I1018 12:19:04.310528  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad: {Name:mk42b24c0f6b076eda0e07dce8424a94f5271da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310658  326490 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt
	I1018 12:19:04.310784  326490 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key
	I1018 12:19:04.310873  326490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key
	I1018 12:19:04.310898  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt with IP's: []
	I1018 12:19:04.385339  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt ...
	I1018 12:19:04.385370  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt: {Name:mk66f445c5bca9cdd3c55e6ee197ee7cb14dae9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.385567  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key ...
	I1018 12:19:04.385584  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key: {Name:mk29fee630df834569bfa6e21a7cc861705c1451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
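
The profile certs generated here are signed by the shared minikubeCA and carry the SANs shown in the log: the apiserver cert covers the cluster service IP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.85.2. A self-signed stand-in built with Go's crypto/x509 that reproduces that SAN list (the real cert is CA-signed rather than self-signed; the 26280h lifetime mirrors CertExpiration from the config dump):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
		// SANs copied from the log's "with IP's:" line plus the DNS names
		// used for the docker-machine server cert earlier in the run.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-579606"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
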
	I1018 12:19:04.385849  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:19:04.385893  326490 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:19:04.385908  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:19:04.385940  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:19:04.385972  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:19:04.386016  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:19:04.386076  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:04.386584  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:19:04.405651  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:19:04.423574  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:19:04.441442  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:19:04.460483  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:19:04.478325  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:19:04.496004  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:19:04.514077  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:19:04.532154  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:19:04.552898  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:19:04.572871  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:19:04.593879  326490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:19:04.608514  326490 ssh_runner.go:195] Run: openssl version
	I1018 12:19:04.615149  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:19:04.624305  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628375  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628425  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.663623  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:19:04.673411  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:19:04.682605  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.686974  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.687061  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.724063  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:19:04.733543  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:19:04.742538  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746549  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746601  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.781517  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
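
The openssl x509 -hash calls compute the OpenSSL subject-name hash for each CA file, and the <hash>.0 symlinks created above (b5213941.0, 3ec20f2e.0, 51391683.0) are what make the certs discoverable in /etc/ssl/certs. A Go sketch of the same two steps for a single cert, shelling out to openssl exactly as the log does (must run as root):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 in this run
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}
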
	I1018 12:19:04.791034  326490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:19:04.794955  326490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:19:04.795012  326490 kubeadm.go:400] StartCluster: {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:04.795092  326490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:04.795154  326490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:04.823284  326490 cri.go:89] found id: ""
	I1018 12:19:04.823356  326490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:19:04.832075  326490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:19:04.840408  326490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:19:04.840478  326490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	W1018 12:19:00.958896  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:03.459593  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:05.845166  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:07.344832  317167 pod_ready.go:94] pod "coredns-66bc5c9577-7qgqj" is "Ready"
	I1018 12:19:07.344882  317167 pod_ready.go:86] duration metric: took 37.505154401s for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.347549  317167 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.351825  317167 pod_ready.go:94] pod "etcd-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.351851  317167 pod_ready.go:86] duration metric: took 4.270969ms for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.353893  317167 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.357781  317167 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.357802  317167 pod_ready.go:86] duration metric: took 3.889439ms for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.359743  317167 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.543689  317167 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.543718  317167 pod_ready.go:86] duration metric: took 183.92899ms for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.742726  317167 pod_ready.go:83] waiting for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.142748  317167 pod_ready.go:94] pod "kube-proxy-bffkr" is "Ready"
	I1018 12:19:08.142797  317167 pod_ready.go:86] duration metric: took 400.045074ms for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.343168  317167 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743587  317167 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:08.743618  317167 pod_ready.go:86] duration metric: took 400.420854ms for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743633  317167 pod_ready.go:40] duration metric: took 38.908363338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:08.790224  317167 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:08.792295  317167 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-028309" cluster and "default" namespace by default
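
The interleaved pod_ready lines (processes 317167 and 319485 here) come from a poll that checks each kube-system pod's Ready condition until it flips, the pod disappears, or the wait times out. A minimal client-go version of the core check, hard-coding a pod name from this log and omitting the "or be gone" branch for brevity (this is a sketch, not minikube's actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, which is
// what the pod_ready.go:94 / :104 lines above are testing.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-66bc5c9577-b6h9l", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the real helper also enforces a deadline
	}
}
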
	I1018 12:19:04.849545  326490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:19:04.849562  326490 kubeadm.go:157] found existing configuration files:
	
	I1018 12:19:04.849600  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:19:04.857827  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:19:04.857889  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:19:04.865939  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:19:04.873915  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:19:04.873983  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:19:04.881861  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.890019  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:19:04.890088  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.898082  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:19:04.906181  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:19:04.906236  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:19:04.914044  326490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:19:04.975919  326490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:19:05.037824  326490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 12:19:05.957990  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:07.958857  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:09.958915  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:12.459097  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:14.458133  319485 pod_ready.go:94] pod "coredns-66bc5c9577-b6h9l" is "Ready"
	I1018 12:19:14.458159  319485 pod_ready.go:86] duration metric: took 31.505202758s for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.459959  319485 pod_ready.go:83] waiting for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.463248  319485 pod_ready.go:94] pod "etcd-embed-certs-175371" is "Ready"
	I1018 12:19:14.463270  319485 pod_ready.go:86] duration metric: took 3.284914ms for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.465089  319485 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.468551  319485 pod_ready.go:94] pod "kube-apiserver-embed-certs-175371" is "Ready"
	I1018 12:19:14.468570  319485 pod_ready.go:86] duration metric: took 3.458555ms for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.470303  319485 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.657339  319485 pod_ready.go:94] pod "kube-controller-manager-embed-certs-175371" is "Ready"
	I1018 12:19:14.657367  319485 pod_ready.go:86] duration metric: took 187.044696ms for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.856446  319485 pod_ready.go:83] waiting for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.257025  319485 pod_ready.go:94] pod "kube-proxy-t2x4c" is "Ready"
	I1018 12:19:15.257053  319485 pod_ready.go:86] duration metric: took 400.581639ms for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.456953  319485 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.893038  326490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:19:15.893090  326490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:19:15.893217  326490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:19:15.893353  326490 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 12:19:15.893498  326490 kubeadm.go:318] OS: Linux
	I1018 12:19:15.893566  326490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:19:15.893627  326490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:19:15.893696  326490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:19:15.893776  326490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:19:15.893850  326490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:19:15.893910  326490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:19:15.893969  326490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:19:15.894035  326490 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 12:19:15.894133  326490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:19:15.894281  326490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:19:15.894412  326490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:19:15.894516  326490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:19:15.896254  326490 out.go:252]   - Generating certificates and keys ...
	I1018 12:19:15.896337  326490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:19:15.896412  326490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:19:15.896489  326490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:19:15.896543  326490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:19:15.896599  326490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:19:15.896657  326490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:19:15.896708  326490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:19:15.896861  326490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.896916  326490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:19:15.897021  326490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.897080  326490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:19:15.897134  326490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:19:15.897176  326490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:19:15.897227  326490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:19:15.897280  326490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:19:15.897332  326490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:19:15.897378  326490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:19:15.897435  326490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:19:15.897486  326490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:19:15.897560  326490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:19:15.897622  326490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:19:15.899813  326490 out.go:252]   - Booting up control plane ...
	I1018 12:19:15.899904  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:19:15.899977  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:19:15.900053  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:19:15.900169  326490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:19:15.900307  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:19:15.900475  326490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:19:15.900586  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:19:15.900647  326490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:19:15.900835  326490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:19:15.900980  326490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:19:15.901059  326490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501237256s
	I1018 12:19:15.901160  326490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:19:15.901257  326490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 12:19:15.901388  326490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:19:15.901499  326490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:19:15.901562  326490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.520322183s
	I1018 12:19:15.901615  326490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.051874304s
	I1018 12:19:15.901668  326490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001667177s
	I1018 12:19:15.901817  326490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:19:15.902084  326490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:19:15.902160  326490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:19:15.902393  326490 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-579606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:19:15.902484  326490 kubeadm.go:318] [bootstrap-token] Using token: pmkr01.67na6m3iuf7b6wke
	I1018 12:19:15.904615  326490 out.go:252]   - Configuring RBAC rules ...
	I1018 12:19:15.904796  326490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:19:15.904875  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:19:15.905028  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:19:15.905156  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:19:15.905290  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:19:15.905391  326490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:19:15.905553  326490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:19:15.905613  326490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:19:15.905676  326490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:19:15.905684  326490 kubeadm.go:318] 
	I1018 12:19:15.905730  326490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:19:15.905736  326490 kubeadm.go:318] 
	I1018 12:19:15.905836  326490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:19:15.905852  326490 kubeadm.go:318] 
	I1018 12:19:15.905891  326490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:19:15.905967  326490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:19:15.906032  326490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:19:15.906040  326490 kubeadm.go:318] 
	I1018 12:19:15.906120  326490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:19:15.906130  326490 kubeadm.go:318] 
	I1018 12:19:15.906195  326490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:19:15.906216  326490 kubeadm.go:318] 
	I1018 12:19:15.906289  326490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:19:15.906393  326490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:19:15.906490  326490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:19:15.906500  326490 kubeadm.go:318] 
	I1018 12:19:15.906596  326490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:19:15.906826  326490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:19:15.906844  326490 kubeadm.go:318] 
	I1018 12:19:15.906936  326490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907119  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 12:19:15.907164  326490 kubeadm.go:318] 	--control-plane 
	I1018 12:19:15.907173  326490 kubeadm.go:318] 
	I1018 12:19:15.907323  326490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:19:15.907337  326490 kubeadm.go:318] 
	I1018 12:19:15.907436  326490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907606  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
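	[editor's note] The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA: kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate with SHA-256. A small Go sketch (the certificate path is kubeadm's default and an assumption here):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// RawSubjectPublicKeyInfo is the DER-encoded SPKI that kubeadm hashes.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}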
	I1018 12:19:15.907623  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:15.907632  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:15.857063  319485 pod_ready.go:94] pod "kube-scheduler-embed-certs-175371" is "Ready"
	I1018 12:19:15.857091  319485 pod_ready.go:86] duration metric: took 400.110605ms for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.857103  319485 pod_ready.go:40] duration metric: took 32.907623738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:15.908233  319485 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:15.909420  326490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:19:15.910368  319485 out.go:179] * Done! kubectl is now configured to use "embed-certs-175371" cluster and "default" namespace by default
	I1018 12:19:15.911428  326490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:19:15.916203  326490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:19:15.916223  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:19:15.930716  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:19:16.186811  326490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:19:16.186877  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.186927  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-579606 minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=newest-cni-579606 minikube.k8s.io/primary=true
	I1018 12:19:16.200483  326490 ops.go:34] apiserver oom_adj: -16
	I1018 12:19:16.289962  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.790297  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.290815  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.790675  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.290971  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.791051  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.291007  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.790041  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.290948  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.364194  326490 kubeadm.go:1113] duration metric: took 4.177366872s to wait for elevateKubeSystemPrivileges
	I1018 12:19:20.364236  326490 kubeadm.go:402] duration metric: took 15.569226889s to StartCluster
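	[editor's note] The repeated `kubectl get sa default` runs between 12:19:16 and 12:19:20 are a retry loop: the command fails until the token controller has created the "default" ServiceAccount, which is what elevateKubeSystemPrivileges waits on before the RBAC binding can take effect. An equivalent client-go loop, as a sketch (clientset setup as in the earlier sketch; the ~500ms cadence matches the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
			if err == nil {
				break
			}
			if !apierrors.IsNotFound(err) {
				panic(err)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
		}
		fmt.Println("default service account exists")
	}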
	I1018 12:19:20.364257  326490 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.364341  326490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:20.366539  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.366808  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:19:20.366823  326490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:19:20.366886  326490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:19:20.366978  326490 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-579606"
	I1018 12:19:20.366998  326490 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-579606"
	I1018 12:19:20.367029  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.367046  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.367047  326490 addons.go:69] Setting default-storageclass=true in profile "newest-cni-579606"
	I1018 12:19:20.367088  326490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-579606"
	I1018 12:19:20.367465  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.367552  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.368575  326490 out.go:179] * Verifying Kubernetes components...
	I1018 12:19:20.370326  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:20.394477  326490 addons.go:238] Setting addon default-storageclass=true in "newest-cni-579606"
	I1018 12:19:20.394522  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.394869  326490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:19:20.395017  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.396676  326490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.396702  326490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:19:20.396772  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423305  326490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.423405  326490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:19:20.423499  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423817  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.453744  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.465106  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:19:20.532388  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:20.546306  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.568683  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.669063  326490 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
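	[editor's note] The replace pipeline at 12:19:20.465 splices a hosts stanza into the coredns Corefile ahead of the forward directive, so host.minikube.internal resolves to the host gateway (192.168.85.1). A client-go sketch of the same edit (minikube does it via sed over SSH as shown; the exact indentation and the clientset setup are assumptions):

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	const hostsStanza = `        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	`

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if !strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
			cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hostsStanza+"        forward .", 1)
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("host record present")
	}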
	I1018 12:19:20.670556  326490 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:19:20.670609  326490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:19:20.899558  326490 api_server.go:72] duration metric: took 532.701277ms to wait for apiserver process to appear ...
	I1018 12:19:20.899596  326490 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:19:20.899623  326490 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:20.906703  326490 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:19:20.907612  326490 api_server.go:141] control plane version: v1.34.1
	I1018 12:19:20.907641  326490 api_server.go:131] duration metric: took 8.037799ms to wait for apiserver health ...
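	[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver; /healthz is readable without credentials under the default system:public-info-viewer binding. A minimal probe (skipping certificate verification is a shortcut for this sketch, not what minikube does, since minikube trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		c := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := c.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok, as logged above
	}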
	I1018 12:19:20.907652  326490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:19:20.909941  326490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:19:20.911175  326490 addons.go:514] duration metric: took 544.288646ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:19:20.911194  326490 system_pods.go:59] 8 kube-system pods found
	I1018 12:19:20.911217  326490 system_pods.go:61] "coredns-66bc5c9577-p6bts" [49609244-6dc2-4950-8fad-8240b827ecca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911224  326490 system_pods.go:61] "etcd-newest-cni-579606" [496c00b4-7ad1-40c0-a440-c396a752cbf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:19:20.911231  326490 system_pods.go:61] "kindnet-2c4t6" [08c0018d-0f0f-435e-8868-31818d5639fa] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:19:20.911238  326490 system_pods.go:61] "kube-apiserver-newest-cni-579606" [a39961c7-019e-41ec-8843-e98e9c2e3604] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:19:20.911249  326490 system_pods.go:61] "kube-controller-manager-newest-cni-579606" [992bd82d-6489-43da-83ba-8dcb6b86fe48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:19:20.911262  326490 system_pods.go:61] "kube-proxy-5hjgn" [915df613-23ce-49e2-b125-d223024077b0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:19:20.911291  326490 system_pods.go:61] "kube-scheduler-newest-cni-579606" [2a1de39e-4fa6-49e8-a420-75a6c82ac73e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:19:20.911306  326490 system_pods.go:61] "storage-provisioner" [c7ff4c04-56e5-469b-9af2-dc1bf4fe969d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911314  326490 system_pods.go:74] duration metric: took 3.655766ms to wait for pod list to return data ...
	I1018 12:19:20.911324  326490 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:19:20.913681  326490 default_sa.go:45] found service account: "default"
	I1018 12:19:20.913702  326490 default_sa.go:55] duration metric: took 2.371901ms for default service account to be created ...
	I1018 12:19:20.913712  326490 kubeadm.go:586] duration metric: took 546.861004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:20.913730  326490 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:19:20.916084  326490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:19:20.916105  326490 node_conditions.go:123] node cpu capacity is 8
	I1018 12:19:20.916117  326490 node_conditions.go:105] duration metric: took 2.382506ms to run NodePressure ...
	I1018 12:19:20.916128  326490 start.go:241] waiting for startup goroutines ...
	I1018 12:19:21.173827  326490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-579606" context rescaled to 1 replicas
	I1018 12:19:21.173870  326490 start.go:246] waiting for cluster config update ...
	I1018 12:19:21.173882  326490 start.go:255] writing updated cluster config ...
	I1018 12:19:21.174193  326490 ssh_runner.go:195] Run: rm -f paused
	I1018 12:19:21.223166  326490 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:21.225317  326490 out.go:179] * Done! kubectl is now configured to use "newest-cni-579606" cluster and "default" namespace by default
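	[editor's note] The kapi.go rescale at 12:19:21 drops the coredns Deployment to a single replica, which is sufficient on a one-node cluster. Through the scale subresource that looks roughly like this (a sketch; clientset setup as in the earlier examples):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("coredns scaled to 1 replica")
	}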
	
	
	==> CRI-O <==
	Oct 18 12:18:52 embed-certs-175371 crio[563]: time="2025-10-18T12:18:52.83581025Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 12:18:52 embed-certs-175371 crio[563]: time="2025-10-18T12:18:52.841064206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 12:18:52 embed-certs-175371 crio[563]: time="2025-10-18T12:18:52.841099677Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.971527464Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=26476176-3a62-42b3-8229-a6220e267d02 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.972370076Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67eafd4c-6e74-455c-90d5-489c3fe4e746 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.973383703Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper" id=a79e4de2-7321-4913-a72e-839ca1577dc7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.9736505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.979524297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.9801566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.015903294Z" level=info msg="Created container a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper" id=a79e4de2-7321-4913-a72e-839ca1577dc7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.016502614Z" level=info msg="Starting container: a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889" id=b13f7d98-e8c1-4727-ac49-75fdf3732d8b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.018465646Z" level=info msg="Started container" PID=1757 containerID=a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper id=b13f7d98-e8c1-4727-ac49-75fdf3732d8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ff71eac7916d9257d2f13c089cac003c220048e18ea9eef187c68409dc9a69a
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.089271029Z" level=info msg="Removing container: 9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb" id=0f7015a0-a0ea-458b-bde0-9cd97bc7ccf0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.099916687Z" level=info msg="Removed container 9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper" id=0f7015a0-a0ea-458b-bde0-9cd97bc7ccf0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.096358873Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9ee81e05-cf9f-42f4-9214-9731df8c46c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.09736587Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f92fe0a1-9104-47d1-9429-9b6131cfdedc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.098470276Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a28a4310-d4d2-45da-834c-caa96eca0d52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.098740271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103222352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103423555Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/144b12b37946c45001c97e144b72befff90afcada575307e35051e2228472cee/merged/etc/passwd: no such file or directory"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103461831Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/144b12b37946c45001c97e144b72befff90afcada575307e35051e2228472cee/merged/etc/group: no such file or directory"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103740363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.129124977Z" level=info msg="Created container 5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59: kube-system/storage-provisioner/storage-provisioner" id=a28a4310-d4d2-45da-834c-caa96eca0d52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.129813492Z" level=info msg="Starting container: 5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59" id=9bca1842-7053-4219-9a80-b77fa0488ab5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.13182002Z" level=info msg="Started container" PID=1771 containerID=5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59 description=kube-system/storage-provisioner/storage-provisioner id=9bca1842-7053-4219-9a80-b77fa0488ab5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18feedd3d7c26e7a2eff27f48d91e337915e0f785c90e299345c24a3ea528fed
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5617debabda54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   18feedd3d7c26       storage-provisioner                          kube-system
	a405ad4e1a98a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   2ff71eac7916d       dashboard-metrics-scraper-6ffb444bf9-24czp   kubernetes-dashboard
	cb1a3164b004d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   eb7ea3ab23330       kubernetes-dashboard-855c9754f9-z4wqj        kubernetes-dashboard
	81b540825c9eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   cb308e2134534       busybox                                      default
	f6306f9162a1d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           47 seconds ago      Running             coredns                     0                   09269391a70af       coredns-66bc5c9577-b6h9l                     kube-system
	4fc9ce5175d37       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           47 seconds ago      Running             kube-proxy                  0                   d825774c10f73       kube-proxy-t2x4c                             kube-system
	36a5bde68e89d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   4ac436233cd3e       kindnet-dxw8r                                kube-system
	ef18b0bcad14e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   18feedd3d7c26       storage-provisioner                          kube-system
	7eed71db702f7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           50 seconds ago      Running             etcd                        0                   1dca7b19b01ff       etcd-embed-certs-175371                      kube-system
	8b43d4c98eba6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           50 seconds ago      Running             kube-apiserver              0                   42a4e0109b4ba       kube-apiserver-embed-certs-175371            kube-system
	d82c539cae499       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           50 seconds ago      Running             kube-scheduler              0                   be01ebffb564c       kube-scheduler-embed-certs-175371            kube-system
	a474582c739fe       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           50 seconds ago      Running             kube-controller-manager     0                   3e5898b103599       kube-controller-manager-embed-certs-175371   kube-system
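	[editor's note] The table above is the CRI view of the node; crictl renders the same data. A sketch of the underlying ListContainers RPC against CRI-O's socket (the socket path is CRI-O's default and an assumption for other setups):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Truncated ID, name, and state, similar to the columns above.
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-28s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}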
	
	
	==> coredns [f6306f9162a1d28042bad4e6da438c5462874638b4d0624b07e6465f0c518b7e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54496 - 19579 "HINFO IN 390884335358352546.2896067784334696330. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.030583319s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
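	[editor's note] The dial errors above mean the coredns pod could not yet reach the kubernetes Service VIP (10.96.0.1:443); that path only works once kube-proxy and the CNI have programmed the node's dataplane, which matches the restart sequence elsewhere in this log. A trivial in-pod probe for the same condition:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err) // coredns's state above
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}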
	
	
	==> describe nodes <==
	Name:               embed-certs-175371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-175371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=embed-certs-175371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_17_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:17:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-175371
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-175371
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d2c06e1f-4c4f-4264-8151-34f2c71eddce
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-b6h9l                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m12s
	  kube-system                 etcd-embed-certs-175371                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m18s
	  kube-system                 kindnet-dxw8r                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-embed-certs-175371             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-embed-certs-175371    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-t2x4c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-embed-certs-175371             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-24czp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z4wqj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  Starting                 47s                    kube-proxy       
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x8 over 2m23s)  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m18s                  kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m18s                  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m18s                  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m14s                  node-controller  Node embed-certs-175371 event: Registered Node embed-certs-175371 in Controller
	  Normal  NodeReady                92s                    kubelet          Node embed-certs-175371 status is now: NodeReady
	  Normal  Starting                 52s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 52s)      kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 52s)      kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 52s)      kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node embed-certs-175371 event: Registered Node embed-certs-175371 in Controller
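	[editor's note] As a sanity check on the Allocated resources block above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8000m capacity is ~10.6%, which kubectl rounds down to the 10% shown. The 220Mi memory request total is likewise 70Mi + 100Mi + 50Mi.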
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd] <==
	{"level":"warn","ts":"2025-10-18T12:18:40.722571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.729260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.735578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.745131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.752729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.759099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.766088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.783955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.792718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.800080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.806892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.814308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.821756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.828334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.835429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.842239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.856900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.865140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.880650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.886959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.894332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.911319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.918001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.924553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.970182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42696","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:19:30 up  1:01,  0 user,  load average: 3.11, 3.83, 2.60
	Linux embed-certs-175371 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36a5bde68e89db4b5596d0782075e0d814c39bdb4c4812f2188ab8957137475e] <==
	I1018 12:18:42.516931       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:18:42.517687       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 12:18:42.517913       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:18:42.517936       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:18:42.517959       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:18:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:18:42.812796       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:18:42.813697       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:18:42.813721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:18:42.813898       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:18:43.114172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:18:43.114195       1 metrics.go:72] Registering metrics
	I1018 12:18:43.114242       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:52.813029       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:18:52.813085       1 main.go:301] handling current node
	I1018 12:19:02.816855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:19:02.816885       1 main.go:301] handling current node
	I1018 12:19:12.811954       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:19:12.811991       1 main.go:301] handling current node
	I1018 12:19:22.818875       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:19:22.818920       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e] <==
	I1018 12:18:41.451393       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 12:18:41.451401       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 12:18:41.451439       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:18:41.451448       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:18:41.451454       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:18:41.451460       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:18:41.451544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 12:18:41.451678       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:18:41.454439       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:18:41.457470       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:18:41.470571       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 12:18:41.482010       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:18:41.493107       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:18:41.530311       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:18:41.703722       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:18:41.735780       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:18:41.758441       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:18:41.767620       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:18:41.777682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:18:41.813438       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.94.86"}
	I1018 12:18:41.826162       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.211.155"}
	I1018 12:18:42.358197       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:18:45.136249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:18:45.231410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:18:45.383497       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f] <==
	I1018 12:18:44.757405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:18:44.757487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:18:44.758091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:18:44.779663       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 12:18:44.779686       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:18:44.779675       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:18:44.779861       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:18:44.779916       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:18:44.780912       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:18:44.780937       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:18:44.781001       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:18:44.781558       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 12:18:44.782815       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:18:44.784183       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:18:44.784327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:18:44.786362       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:18:44.786404       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:18:44.786433       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:18:44.786487       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:18:44.786493       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:18:44.786498       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:18:44.788081       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:18:44.790324       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:18:44.792597       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:18:44.802922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4fc9ce5175d3764f8e0fb91e099e901a2302dfd2ff50d4abfb0a9edeb71386f9] <==
	I1018 12:18:42.376048       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:18:42.438173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:18:42.538657       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:18:42.538710       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 12:18:42.538808       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:18:42.561745       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:18:42.561820       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:18:42.568657       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:18:42.569231       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:18:42.569254       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:42.570622       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:18:42.570650       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:18:42.570684       1 config.go:200] "Starting service config controller"
	I1018 12:18:42.570729       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:18:42.570728       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:18:42.570745       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:18:42.570989       1 config.go:309] "Starting node config controller"
	I1018 12:18:42.571003       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:18:42.671520       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:18:42.671555       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:18:42.671529       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:18:42.671582       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40] <==
	I1018 12:18:41.414640       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:18:41.414679       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:41.418106       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:41.418145       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:41.418233       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:18:41.418381       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:18:41.431162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:18:41.434890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:18:41.435055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:18:41.436145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:41.436254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:18:41.436367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:18:41.436447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:18:41.437128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:18:41.436582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:41.436642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:18:41.436811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:41.436985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:41.437056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:18:41.436520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:41.437217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:18:41.437441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:18:41.437550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:18:41.438047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1018 12:18:42.319397       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:18:45 embed-certs-175371 kubelet[723]: I1018 12:18:45.301508     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9nh\" (UniqueName: \"kubernetes.io/projected/a954deab-5a8a-4354-9e53-7ac4c92d040f-kube-api-access-4z9nh\") pod \"dashboard-metrics-scraper-6ffb444bf9-24czp\" (UID: \"a954deab-5a8a-4354-9e53-7ac4c92d040f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp"
	Oct 18 12:18:45 embed-certs-175371 kubelet[723]: I1018 12:18:45.301526     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a954deab-5a8a-4354-9e53-7ac4c92d040f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-24czp\" (UID: \"a954deab-5a8a-4354-9e53-7ac4c92d040f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp"
	Oct 18 12:18:45 embed-certs-175371 kubelet[723]: I1018 12:18:45.301547     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9162a212-7249-4ae3-a9ee-877a66ae4adf-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-z4wqj\" (UID: \"9162a212-7249-4ae3-a9ee-877a66ae4adf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z4wqj"
	Oct 18 12:18:49 embed-certs-175371 kubelet[723]: I1018 12:18:49.023695     723 scope.go:117] "RemoveContainer" containerID="e2d68f39dd5ab27c50cfd823b70df7f3b6aed834bd32c61c6da1199a2135cc4c"
	Oct 18 12:18:50 embed-certs-175371 kubelet[723]: I1018 12:18:50.029603     723 scope.go:117] "RemoveContainer" containerID="e2d68f39dd5ab27c50cfd823b70df7f3b6aed834bd32c61c6da1199a2135cc4c"
	Oct 18 12:18:50 embed-certs-175371 kubelet[723]: I1018 12:18:50.030701     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:18:50 embed-certs-175371 kubelet[723]: E1018 12:18:50.031376     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:18:51 embed-certs-175371 kubelet[723]: I1018 12:18:51.032436     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:18:51 embed-certs-175371 kubelet[723]: E1018 12:18:51.032609     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:18:54 embed-certs-175371 kubelet[723]: I1018 12:18:54.559918     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z4wqj" podStartSLOduration=3.816114155 podStartE2EDuration="9.559890666s" podCreationTimestamp="2025-10-18 12:18:45 +0000 UTC" firstStartedPulling="2025-10-18 12:18:45.535653359 +0000 UTC m=+6.657332094" lastFinishedPulling="2025-10-18 12:18:51.279429856 +0000 UTC m=+12.401108605" observedRunningTime="2025-10-18 12:18:52.046564184 +0000 UTC m=+13.168242958" watchObservedRunningTime="2025-10-18 12:18:54.559890666 +0000 UTC m=+15.681569422"
	Oct 18 12:18:55 embed-certs-175371 kubelet[723]: I1018 12:18:55.088342     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:18:55 embed-certs-175371 kubelet[723]: E1018 12:18:55.088570     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:08 embed-certs-175371 kubelet[723]: I1018 12:19:08.971136     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:19:09 embed-certs-175371 kubelet[723]: I1018 12:19:09.083607     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:19:09 embed-certs-175371 kubelet[723]: I1018 12:19:09.083974     723 scope.go:117] "RemoveContainer" containerID="a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	Oct 18 12:19:09 embed-certs-175371 kubelet[723]: E1018 12:19:09.084344     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:13 embed-certs-175371 kubelet[723]: I1018 12:19:13.095872     723 scope.go:117] "RemoveContainer" containerID="ef18b0bcad14e848b1c27658083f65d022651b906dddfc0ef264638b57310d83"
	Oct 18 12:19:15 embed-certs-175371 kubelet[723]: I1018 12:19:15.089282     723 scope.go:117] "RemoveContainer" containerID="a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	Oct 18 12:19:15 embed-certs-175371 kubelet[723]: E1018 12:19:15.089504     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:26 embed-certs-175371 kubelet[723]: I1018 12:19:26.970952     723 scope.go:117] "RemoveContainer" containerID="a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	Oct 18 12:19:26 embed-certs-175371 kubelet[723]: E1018 12:19:26.971196     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: kubelet.service: Consumed 1.653s CPU time.
	
	
	==> kubernetes-dashboard [cb1a3164b004db279fa65be1382cd2de2087a29d8a9572c7d9390b8435ece780] <==
	2025/10/18 12:18:51 Starting overwatch
	2025/10/18 12:18:51 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:51 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:51 Using secret token for csrf signing
	2025/10/18 12:18:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:18:51 Generating JWE encryption key
	2025/10/18 12:18:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:51 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:51 Creating in-cluster Sidecar client
	2025/10/18 12:18:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:51 Serving insecurely on HTTP port: 9090
	2025/10/18 12:19:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59] <==
	I1018 12:19:13.144449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:19:13.153615       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:19:13.153676       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:19:13.155935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:16.610476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:20.874272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:24.473048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:27.526882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:30.548943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:30.553781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:30.553974       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:19:30.554115       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5075b3f2-7e93-4c37-98dd-c9faa2e4aa50", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-175371_9e8dd8a0-c67c-4765-8889-3b4c8f207b6f became leader
	I1018 12:19:30.554161       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-175371_9e8dd8a0-c67c-4765-8889-3b4c8f207b6f!
	W1018 12:19:30.555966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:30.558837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:30.655189       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-175371_9e8dd8a0-c67c-4765-8889-3b4c8f207b6f!
	
	
	==> storage-provisioner [ef18b0bcad14e848b1c27658083f65d022651b906dddfc0ef264638b57310d83] <==
	I1018 12:18:42.335970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:19:12.338133       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175371 -n embed-certs-175371
E1018 12:19:31.049470    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:31.055918    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:31.067318    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175371 -n embed-certs-175371: exit status 2 (302.006742ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-175371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1018 12:19:31.089719    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
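The failed pause above can be probed by hand. A minimal sketch, assuming the profile name shown in these logs and that crictl is present in the kicbase node image (both assumptions are taken from this report, not re-verified against the run):

	# inspect container state on the crio runtime after the failed pause
	out/minikube-linux-amd64 ssh -p embed-certs-175371 -- sudo crictl ps -a
	# retry the pause with verbose logging to surface the runtime error
	out/minikube-linux-amd64 pause -p embed-certs-175371 --alsologtostderr -v=1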
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-175371
E1018 12:19:31.131558    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect embed-certs-175371:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1",
	        "Created": "2025-10-18T12:16:56.477755693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319691,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:18:30.947531585Z",
	            "FinishedAt": "2025-10-18T12:18:30.09328773Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/hostname",
	        "HostsPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/hosts",
	        "LogPath": "/var/lib/docker/containers/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1/62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1-json.log",
	        "Name": "/embed-certs-175371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-175371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-175371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "62e5625dfcf21e77faae50fbe63819a87dcea6ccd7f614ab26d5be607743fbe1",
	                "LowerDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e06ef0c32a59fe4b04f9f9b75061096d71e1402dd79ce7cee08e3d509e9b62d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-175371",
	                "Source": "/var/lib/docker/volumes/embed-certs-175371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-175371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-175371",
	                "name.minikube.sigs.k8s.io": "embed-certs-175371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6fec3315c8af5bfe98464f6647c3daf969a719ab3bf25b319e08603b9bcd0f83",
	            "SandboxKey": "/var/run/docker/netns/6fec3315c8af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-175371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:73:2c:89:ea:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb34d5222966a405cf9b383e8910070a73637f333cd8b420bf2f4d8d0d6f8e0",
	                    "EndpointID": "ba6c3969779f896f7a117457772b255d8ebe76fe55fe84572750db4a4d43d4da",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-175371",
	                        "62e5625dfcf2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
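To pull just the port mappings out of an inspect dump like the one above, a Go template filter avoids reading the full JSON by eye (a sketch; the profile name comes from this report):

	docker inspect embed-certs-175371 --format '{{json .NetworkSettings.Ports}}'

In this run the 8443/tcp entry maps to 127.0.0.1:33126, the apiserver endpoint that the status checks in this post-mortem were hitting.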
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371
E1018 12:19:31.213212    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:31.374897    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371: exit status 2 (327.149286ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-175371 logs -n 25
E1018 12:19:31.696885    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:32.338698    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-175371 logs -n 25: (1.068149064s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-028309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-028309 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-175371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ stop    │ -p embed-certs-175371 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ default-k8s-diff-port-028309 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p default-k8s-diff-port-028309 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ stop    │ -p newest-cni-579606 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ embed-certs-175371 image list --format=json                                                                                                                                                                                                   │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p embed-certs-175371 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:18:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:18:54.845878  326490 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:54.846118  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846127  326490 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:54.846131  326490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:54.846326  326490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:18:54.846865  326490 out.go:368] Setting JSON to false
	I1018 12:18:54.848113  326490 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3683,"bootTime":1760786252,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:18:54.848206  326490 start.go:141] virtualization: kvm guest
	I1018 12:18:54.851418  326490 out.go:179] * [newest-cni-579606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:18:54.856390  326490 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:18:54.856377  326490 notify.go:220] Checking for updates...
	I1018 12:18:54.857910  326490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:18:54.859215  326490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:18:54.860446  326490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:18:54.861847  326490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:18:54.863137  326490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:18:54.864900  326490 config.go:182] Loaded profile config "default-k8s-diff-port-028309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.864984  326490 config.go:182] Loaded profile config "embed-certs-175371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:54.865092  326490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:18:54.888492  326490 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:18:54.888598  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:54.953711  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:54.941671438 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:54.953923  326490 docker.go:318] overlay module found
	I1018 12:18:54.958794  326490 out.go:179] * Using the docker driver based on user configuration
	I1018 12:18:54.960013  326490 start.go:305] selected driver: docker
	I1018 12:18:54.960033  326490 start.go:925] validating driver "docker" against <nil>
	I1018 12:18:54.960046  326490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:18:54.960615  326490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:18:55.022513  326490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 12:18:55.011731081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:18:55.022798  326490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 12:18:55.022840  326490 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 12:18:55.023141  326490 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:18:55.025322  326490 out.go:179] * Using Docker driver with root privileges
	I1018 12:18:55.026401  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:18:55.026484  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:18:55.026498  326490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:18:55.026560  326490 start.go:349] cluster config:
	{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:18:55.027938  326490 out.go:179] * Starting "newest-cni-579606" primary control-plane node in "newest-cni-579606" cluster
	I1018 12:18:55.029100  326490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:18:55.030360  326490 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:18:55.031422  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.031468  326490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:18:55.031489  326490 cache.go:58] Caching tarball of preloaded images
	I1018 12:18:55.031522  326490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:18:55.031591  326490 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:18:55.031603  326490 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:18:55.031705  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:18:55.031726  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json: {Name:mk20e362fc30401f09fc034ac5a55088adce3cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:55.053307  326490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:18:55.053326  326490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:18:55.053342  326490 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:18:55.053373  326490 start.go:360] acquireMachinesLock for newest-cni-579606: {Name:mk4161cf0bf2eb93a8110dc388332ec9ca8fc5ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:18:55.053467  326490 start.go:364] duration metric: took 78.123µs to acquireMachinesLock for "newest-cni-579606"
	I1018 12:18:55.053489  326490 start.go:93] Provisioning new machine with config: &{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:18:55.053550  326490 start.go:125] createHost starting for "" (driver="docker")
	W1018 12:18:51.958241  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:53.959108  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:55.846032  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:18:58.346225  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:18:55.055345  326490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:18:55.055547  326490 start.go:159] libmachine.API.Create for "newest-cni-579606" (driver="docker")
	I1018 12:18:55.055575  326490 client.go:168] LocalClient.Create starting
	I1018 12:18:55.055636  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem
	I1018 12:18:55.055669  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055683  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.055736  326490 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem
	I1018 12:18:55.055773  326490 main.go:141] libmachine: Decoding PEM data...
	I1018 12:18:55.055796  326490 main.go:141] libmachine: Parsing certificate...
	I1018 12:18:55.056153  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:18:55.073803  326490 cli_runner.go:211] docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:18:55.073868  326490 network_create.go:284] running [docker network inspect newest-cni-579606] to gather additional debugging logs...
	I1018 12:18:55.073887  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606
	W1018 12:18:55.092574  326490 cli_runner.go:211] docker network inspect newest-cni-579606 returned with exit code 1
	I1018 12:18:55.092605  326490 network_create.go:287] error running [docker network inspect newest-cni-579606]: docker network inspect newest-cni-579606: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-579606 not found
	I1018 12:18:55.092623  326490 network_create.go:289] output of [docker network inspect newest-cni-579606]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-579606 not found
	
	** /stderr **
	I1018 12:18:55.092788  326490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:18:55.111259  326490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
	I1018 12:18:55.111908  326490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6069a4ec9777 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:f7:2a:6b:48:b9} reservation:<nil>}
	I1018 12:18:55.112751  326490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-670e794a7c9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:d0:78:df:c7:fd} reservation:<nil>}
	I1018 12:18:55.113423  326490 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb34d522296 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:fc:1a:65:23:03} reservation:<nil>}
	I1018 12:18:55.114281  326490 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc7b00}
	I1018 12:18:55.114303  326490 network_create.go:124] attempt to create docker network newest-cni-579606 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 12:18:55.114345  326490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-579606 newest-cni-579606
	I1018 12:18:55.175643  326490 network_create.go:108] docker network newest-cni-579606 192.168.85.0/24 created
	I1018 12:18:55.175691  326490 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-579606" container
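
Editor's note: the subnet scan above skipped 192.168.49.0/24, .58, .67 and .76 because existing bridges own them, then settled on 192.168.85.0/24; the candidates step the third octet by 9. A small sketch of that selection logic, with the start and step size inferred from this log rather than taken from minikube's source:

    // Sketch of the free-subnet scan recorded above: candidate /24s start at
    // 192.168.49.0 and step the third octet by 9 (49, 58, 67, 76, 85, ...),
    // skipping any subnet already owned by a docker bridge network.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			return subnet
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-1c78aef7d2ee in the log
    		"192.168.58.0/24": true, // br-6069a4ec9777
    		"192.168.67.0/24": true, // br-670e794a7c9f
    		"192.168.76.0/24": true, // br-8bb34d522296
    	}
    	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
    }
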
	I1018 12:18:55.175752  326490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:18:55.193582  326490 cli_runner.go:164] Run: docker volume create newest-cni-579606 --label name.minikube.sigs.k8s.io=newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:18:55.212499  326490 oci.go:103] Successfully created a docker volume newest-cni-579606
	I1018 12:18:55.212595  326490 cli_runner.go:164] Run: docker run --rm --name newest-cni-579606-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --entrypoint /usr/bin/test -v newest-cni-579606:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:18:55.635994  326490 oci.go:107] Successfully prepared a docker volume newest-cni-579606
	I1018 12:18:55.636038  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:18:55.636063  326490 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:18:55.636128  326490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 12:18:56.458229  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:18:58.958191  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:00.126774  326490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-579606:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490575425s)
	I1018 12:19:00.126807  326490 kic.go:203] duration metric: took 4.4907405s to extract preloaded images to volume ...
	W1018 12:19:00.126891  326490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 12:19:00.126924  326490 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 12:19:00.126991  326490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:19:00.190480  326490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-579606 --name newest-cni-579606 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-579606 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-579606 --network newest-cni-579606 --ip 192.168.85.2 --volume newest-cni-579606:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:19:00.476973  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Running}}
	I1018 12:19:00.495553  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.516545  326490 cli_runner.go:164] Run: docker exec newest-cni-579606 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:19:00.562561  326490 oci.go:144] the created container "newest-cni-579606" has a running status.
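
Editor's note: the container above was started with --publish=127.0.0.1::22 (and friends), so Docker picked ephemeral loopback ports, and the inspect template used below recovers them (it prints 33128 for SSH in this run). A sketch of that lookup, assuming the container name from this run; hostSSHPort is a hypothetical helper, not a minikube function:

    // Sketch: recover the host port Docker assigned to the container's SSH
    // endpoint, using the same inspect template that appears in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("newest-cni-579606")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh docker@127.0.0.1 -p", port)
    }
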
	I1018 12:19:00.562609  326490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa...
	I1018 12:19:00.820117  326490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:19:00.854117  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.877422  326490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:19:00.877449  326490 kic_runner.go:114] Args: [docker exec --privileged newest-cni-579606 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:19:00.925342  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:00.944520  326490 machine.go:93] provisionDockerMachine start ...
	I1018 12:19:00.944616  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:00.964493  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:00.964838  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:00.964858  326490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:19:01.103775  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.103807  326490 ubuntu.go:182] provisioning hostname "newest-cni-579606"
	I1018 12:19:01.103880  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.124094  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.124376  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.124392  326490 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-579606 && echo "newest-cni-579606" | sudo tee /etc/hostname
	I1018 12:19:01.270628  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:01.270703  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.289410  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.289674  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.289696  326490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-579606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-579606/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-579606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:19:01.423556  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:19:01.423583  326490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:19:01.423603  326490 ubuntu.go:190] setting up certificates
	I1018 12:19:01.423619  326490 provision.go:84] configureAuth start
	I1018 12:19:01.423685  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:01.442627  326490 provision.go:143] copyHostCerts
	I1018 12:19:01.442683  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:19:01.442692  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:19:01.442779  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:19:01.442877  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:19:01.442887  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:19:01.442920  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:19:01.443028  326490 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:19:01.443058  326490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:19:01.443088  326490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:19:01.443142  326490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.newest-cni-579606 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-579606]
	I1018 12:19:01.605969  326490 provision.go:177] copyRemoteCerts
	I1018 12:19:01.606038  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:19:01.606085  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.625297  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:01.723582  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:19:01.744640  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:19:01.763599  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:19:01.784423  326490 provision.go:87] duration metric: took 360.788993ms to configureAuth
	I1018 12:19:01.784458  326490 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:19:01.784652  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:01.784752  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:01.804299  326490 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:01.804508  326490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1018 12:19:01.804524  326490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:19:02.051413  326490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:19:02.051436  326490 machine.go:96] duration metric: took 1.106891251s to provisionDockerMachine
	I1018 12:19:02.051444  326490 client.go:171] duration metric: took 6.995862509s to LocalClient.Create
	I1018 12:19:02.051460  326490 start.go:167] duration metric: took 6.995914544s to libmachine.API.Create "newest-cni-579606"
	I1018 12:19:02.051470  326490 start.go:293] postStartSetup for "newest-cni-579606" (driver="docker")
	I1018 12:19:02.051482  326490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:19:02.051542  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:19:02.051582  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.069826  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.169332  326490 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:19:02.173028  326490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:19:02.173060  326490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:19:02.173075  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:19:02.173131  326490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:19:02.173202  326490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:19:02.173312  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:19:02.181632  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:02.201730  326490 start.go:296] duration metric: took 150.246741ms for postStartSetup
	I1018 12:19:02.202117  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.220168  326490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:02.220438  326490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:19:02.220477  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.238665  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.333039  326490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:19:02.337804  326490 start.go:128] duration metric: took 7.284234042s to createHost
	I1018 12:19:02.337830  326490 start.go:83] releasing machines lock for "newest-cni-579606", held for 7.284352735s
	I1018 12:19:02.337891  326490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:02.357339  326490 ssh_runner.go:195] Run: cat /version.json
	I1018 12:19:02.357373  326490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:19:02.357386  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.357430  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:02.376606  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.377490  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:02.526194  326490 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:02.532929  326490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:19:02.568991  326490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:19:02.574362  326490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:19:02.574428  326490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:19:02.602949  326490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 12:19:02.602987  326490 start.go:495] detecting cgroup driver to use...
	I1018 12:19:02.603019  326490 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:19:02.603065  326490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:19:02.619432  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:19:02.632985  326490 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:19:02.633047  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:19:02.650953  326490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:19:02.670802  326490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:19:02.756116  326490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:19:02.848839  326490 docker.go:234] disabling docker service ...
	I1018 12:19:02.848900  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:19:02.868131  326490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:19:02.881575  326490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:19:02.965443  326490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:19:03.051508  326490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:19:03.064380  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:19:03.079484  326490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:19:03.079554  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.090169  326490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:19:03.090229  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.099749  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.109431  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.118802  326490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:19:03.127410  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.136357  326490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.151150  326490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:03.160956  326490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:19:03.169094  326490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:19:03.177522  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.257714  326490 ssh_runner.go:195] Run: sudo systemctl restart crio
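
Editor's note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager before restarting CRI-O. The same two rewrites expressed as a standalone Go sketch (error handling trimmed; this is an illustration, not minikube's code):

    // Sketch of the config rewrite performed by the sed commands above:
    // point CRI-O's drop-in at the desired pause image and cgroup manager.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }
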
	I1018 12:19:03.374283  326490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:19:03.374356  326490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:19:03.378571  326490 start.go:563] Will wait 60s for crictl version
	I1018 12:19:03.378624  326490 ssh_runner.go:195] Run: which crictl
	I1018 12:19:03.382638  326490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:19:03.406896  326490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:19:03.406996  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.436202  326490 ssh_runner.go:195] Run: crio --version
	I1018 12:19:03.466606  326490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:19:03.468046  326490 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:19:03.485613  326490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:19:03.489792  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
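
Editor's note: the guarded rewrite above strips any stale host.minikube.internal line, appends the fresh mapping, and then copies the temp file over /etc/hosts instead of renaming it; inside a container /etc/hosts is a bind mount, so it can only be updated in place, which is presumably why the temp-file-plus-cp idiom is used here and again below for control-plane.minikube.internal.
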
	I1018 12:19:03.502123  326490 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1018 12:19:00.846128  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	W1018 12:19:03.345904  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:03.503451  326490 kubeadm.go:883] updating cluster {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:19:03.503568  326490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:03.503623  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.537963  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.537988  326490 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:19:03.538037  326490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:03.564020  326490 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:03.564061  326490 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:19:03.564071  326490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 12:19:03.564172  326490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-579606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:19:03.564251  326490 ssh_runner.go:195] Run: crio config
	I1018 12:19:03.609404  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:03.609430  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:03.609446  326490 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 12:19:03.609473  326490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-579606 NodeName:newest-cni-579606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:19:03.609666  326490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-579606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:19:03.609744  326490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:19:03.618201  326490 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:19:03.618283  326490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:19:03.626679  326490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:19:03.639983  326490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:19:03.655953  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 12:19:03.668846  326490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:19:03.672666  326490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:03.683073  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:03.766600  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:03.797248  326490 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606 for IP: 192.168.85.2
	I1018 12:19:03.797269  326490 certs.go:195] generating shared ca certs ...
	I1018 12:19:03.797296  326490 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:03.797445  326490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:19:03.797500  326490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:19:03.797513  326490 certs.go:257] generating profile certs ...
	I1018 12:19:03.797585  326490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key
	I1018 12:19:03.797609  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt with IP's: []
	I1018 12:19:04.196975  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt ...
	I1018 12:19:04.197011  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.crt: {Name:mka42a654d079c2a23058a0f14154e8b79ca5459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197222  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key ...
	I1018 12:19:04.197241  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key: {Name:mk220b04a2afae0bcb10852575c558c1404f1005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.197355  326490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad
	I1018 12:19:04.197378  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 12:19:04.310285  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad ...
	I1018 12:19:04.310312  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad: {Name:mke978bbcfe8f1a2cbf3531371f43b4028ef678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310509  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad ...
	I1018 12:19:04.310528  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad: {Name:mk42b24c0f6b076eda0e07dce8424a94f5271da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.310658  326490 certs.go:382] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt
	I1018 12:19:04.310784  326490 certs.go:386] copying /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad -> /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key
	I1018 12:19:04.310873  326490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key
	I1018 12:19:04.310898  326490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt with IP's: []
	I1018 12:19:04.385339  326490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt ...
	I1018 12:19:04.385370  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt: {Name:mk66f445c5bca9cdd3c55e6ee197ee7cb14dae9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:04.385567  326490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key ...
	I1018 12:19:04.385584  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key: {Name:mk29fee630df834569bfa6e21a7cc861705c1451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
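
Editor's note: the profile certs generated above are ordinary CA-signed X.509 material; the apiserver cert carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] and the 26280h expiry from the cluster config. A self-contained crypto/x509 sketch of that kind of issuance, with a stand-in CA generated on the spot instead of the minikubeCA key pair that the run loads from .minikube/ca.{crt,key}; this is an illustration, not minikube's crypto.go:

    // Sketch: issue a server certificate with the IP SANs shown in the log,
    // signed by a throwaway CA (the real run reuses the cached minikubeCA).
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // the SANs logged for the apiserver cert
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("issued apiserver cert,", len(der), "bytes of DER")
    }
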
	I1018 12:19:04.385849  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:19:04.385893  326490 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:19:04.385908  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:19:04.385940  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:19:04.385972  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:19:04.386016  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:19:04.386076  326490 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:04.386584  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:19:04.405651  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:19:04.423574  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:19:04.441442  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:19:04.460483  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:19:04.478325  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:19:04.496004  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:19:04.514077  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:19:04.532154  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:19:04.552898  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:19:04.572871  326490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:19:04.593879  326490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:19:04.608514  326490 ssh_runner.go:195] Run: openssl version
	I1018 12:19:04.615149  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:19:04.624305  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628375  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.628425  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:19:04.663623  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:19:04.673411  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:19:04.682605  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.686974  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.687061  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:04.724063  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:19:04.733543  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:19:04.742538  326490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746549  326490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.746601  326490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:19:04.781517  326490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
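	(Editor's note) The openssl/ln sequence above is how minikube registers each CA in the node's system trust store: compute the OpenSSL subject hash of the PEM, then symlink /etc/ssl/certs/<hash>.0 to it. A minimal Go sketch of that single step, shelling out to the same openssl invocation the log shows (the path in main is illustrative, not the only one installed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert mirrors the log above: hash the cert with openssl,
	// then symlink /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based
	// clients can find it by subject hash.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs equivalent: drop any stale link first
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Illustrative path taken from the log; adjust for your host.
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}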
	I1018 12:19:04.791034  326490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:19:04.794955  326490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:19:04.795012  326490 kubeadm.go:400] StartCluster: {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:04.795092  326490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:04.795154  326490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:04.823284  326490 cri.go:89] found id: ""
	I1018 12:19:04.823356  326490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:19:04.832075  326490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:19:04.840408  326490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:19:04.840478  326490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	W1018 12:19:00.958896  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:03.459593  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:05.845166  317167 pod_ready.go:104] pod "coredns-66bc5c9577-7qgqj" is not "Ready", error: <nil>
	I1018 12:19:07.344832  317167 pod_ready.go:94] pod "coredns-66bc5c9577-7qgqj" is "Ready"
	I1018 12:19:07.344882  317167 pod_ready.go:86] duration metric: took 37.505154401s for pod "coredns-66bc5c9577-7qgqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.347549  317167 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.351825  317167 pod_ready.go:94] pod "etcd-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.351851  317167 pod_ready.go:86] duration metric: took 4.270969ms for pod "etcd-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.353893  317167 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.357781  317167 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.357802  317167 pod_ready.go:86] duration metric: took 3.889439ms for pod "kube-apiserver-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.359743  317167 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.543689  317167 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:07.543718  317167 pod_ready.go:86] duration metric: took 183.92899ms for pod "kube-controller-manager-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:07.742726  317167 pod_ready.go:83] waiting for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.142748  317167 pod_ready.go:94] pod "kube-proxy-bffkr" is "Ready"
	I1018 12:19:08.142797  317167 pod_ready.go:86] duration metric: took 400.045074ms for pod "kube-proxy-bffkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.343168  317167 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743587  317167 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-028309" is "Ready"
	I1018 12:19:08.743618  317167 pod_ready.go:86] duration metric: took 400.420854ms for pod "kube-scheduler-default-k8s-diff-port-028309" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:08.743633  317167 pod_ready.go:40] duration metric: took 38.908363338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:08.790224  317167 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:08.792295  317167 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-028309" cluster and "default" namespace by default
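	(Editor's note) The pod_ready.go waits interleaved through this log poll each kube-system pod until its Ready condition turns True. This is not minikube's code, just a client-go sketch of the shape of one such wait (the kubeconfig path constant and pod name are taken as examples from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll on a short interval, as the roughly half-second gaps in the log suggest.
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-7qgqj", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}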
	I1018 12:19:04.849545  326490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:19:04.849562  326490 kubeadm.go:157] found existing configuration files:
	
	I1018 12:19:04.849600  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:19:04.857827  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:19:04.857889  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:19:04.865939  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:19:04.873915  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:19:04.873983  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:19:04.881861  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.890019  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:19:04.890088  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:19:04.898082  326490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:19:04.906181  326490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:19:04.906236  326490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
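	(Editor's note) Each grep-then-rm pair above applies the same rule: an existing kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm regenerates it. A compact sketch of that guard (file list and endpoint copied from the log; the real code runs the check over SSH inside the node):

	package main

	import (
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	// cleanStaleConfigs removes any kubeconfig that exists but does not
	// reference the expected control-plane endpoint, matching the
	// grep-then-rm pattern in the log above.
	func cleanStaleConfigs(paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil {
				continue // missing file: nothing stale to clean, kubeadm will create it
			}
			if !strings.Contains(string(data), endpoint) {
				_ = os.Remove(p)
			}
		}
	}

	func main() {
		cleanStaleConfigs([]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}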
	I1018 12:19:04.914044  326490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:19:04.975919  326490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 12:19:05.037824  326490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 12:19:05.957990  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:07.958857  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:09.958915  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	W1018 12:19:12.459097  319485 pod_ready.go:104] pod "coredns-66bc5c9577-b6h9l" is not "Ready", error: <nil>
	I1018 12:19:14.458133  319485 pod_ready.go:94] pod "coredns-66bc5c9577-b6h9l" is "Ready"
	I1018 12:19:14.458159  319485 pod_ready.go:86] duration metric: took 31.505202758s for pod "coredns-66bc5c9577-b6h9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.459959  319485 pod_ready.go:83] waiting for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.463248  319485 pod_ready.go:94] pod "etcd-embed-certs-175371" is "Ready"
	I1018 12:19:14.463270  319485 pod_ready.go:86] duration metric: took 3.284914ms for pod "etcd-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.465089  319485 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.468551  319485 pod_ready.go:94] pod "kube-apiserver-embed-certs-175371" is "Ready"
	I1018 12:19:14.468570  319485 pod_ready.go:86] duration metric: took 3.458555ms for pod "kube-apiserver-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.470303  319485 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.657339  319485 pod_ready.go:94] pod "kube-controller-manager-embed-certs-175371" is "Ready"
	I1018 12:19:14.657367  319485 pod_ready.go:86] duration metric: took 187.044696ms for pod "kube-controller-manager-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:14.856446  319485 pod_ready.go:83] waiting for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.257025  319485 pod_ready.go:94] pod "kube-proxy-t2x4c" is "Ready"
	I1018 12:19:15.257053  319485 pod_ready.go:86] duration metric: took 400.581639ms for pod "kube-proxy-t2x4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.456953  319485 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.893038  326490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:19:15.893090  326490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:19:15.893217  326490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:19:15.893353  326490 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 12:19:15.893498  326490 kubeadm.go:318] OS: Linux
	I1018 12:19:15.893566  326490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:19:15.893627  326490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:19:15.893696  326490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:19:15.893776  326490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:19:15.893850  326490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:19:15.893910  326490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:19:15.893969  326490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:19:15.894035  326490 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 12:19:15.894133  326490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:19:15.894281  326490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:19:15.894412  326490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:19:15.894516  326490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:19:15.896254  326490 out.go:252]   - Generating certificates and keys ...
	I1018 12:19:15.896337  326490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:19:15.896412  326490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:19:15.896489  326490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:19:15.896543  326490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:19:15.896599  326490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:19:15.896657  326490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:19:15.896708  326490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:19:15.896861  326490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.896916  326490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:19:15.897021  326490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-579606] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 12:19:15.897080  326490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:19:15.897134  326490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:19:15.897176  326490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:19:15.897227  326490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:19:15.897280  326490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:19:15.897332  326490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:19:15.897378  326490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:19:15.897435  326490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:19:15.897486  326490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:19:15.897560  326490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:19:15.897622  326490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:19:15.899813  326490 out.go:252]   - Booting up control plane ...
	I1018 12:19:15.899904  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:19:15.899977  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:19:15.900053  326490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:19:15.900169  326490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:19:15.900307  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:19:15.900475  326490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:19:15.900586  326490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:19:15.900647  326490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:19:15.900835  326490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:19:15.900980  326490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:19:15.901059  326490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501237256s
	I1018 12:19:15.901160  326490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:19:15.901257  326490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 12:19:15.901388  326490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:19:15.901499  326490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:19:15.901562  326490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.520322183s
	I1018 12:19:15.901615  326490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.051874304s
	I1018 12:19:15.901668  326490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001667177s
	I1018 12:19:15.901817  326490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:19:15.902084  326490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:19:15.902160  326490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:19:15.902393  326490 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-579606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:19:15.902484  326490 kubeadm.go:318] [bootstrap-token] Using token: pmkr01.67na6m3iuf7b6wke
	I1018 12:19:15.904615  326490 out.go:252]   - Configuring RBAC rules ...
	I1018 12:19:15.904796  326490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:19:15.904875  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:19:15.905028  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:19:15.905156  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:19:15.905290  326490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:19:15.905391  326490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:19:15.905553  326490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:19:15.905613  326490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:19:15.905676  326490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:19:15.905684  326490 kubeadm.go:318] 
	I1018 12:19:15.905730  326490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:19:15.905736  326490 kubeadm.go:318] 
	I1018 12:19:15.905836  326490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:19:15.905852  326490 kubeadm.go:318] 
	I1018 12:19:15.905891  326490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:19:15.905967  326490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:19:15.906032  326490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:19:15.906040  326490 kubeadm.go:318] 
	I1018 12:19:15.906120  326490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:19:15.906130  326490 kubeadm.go:318] 
	I1018 12:19:15.906195  326490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:19:15.906216  326490 kubeadm.go:318] 
	I1018 12:19:15.906289  326490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:19:15.906393  326490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:19:15.906490  326490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:19:15.906500  326490 kubeadm.go:318] 
	I1018 12:19:15.906596  326490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:19:15.906826  326490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:19:15.906844  326490 kubeadm.go:318] 
	I1018 12:19:15.906936  326490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907119  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d \
	I1018 12:19:15.907164  326490 kubeadm.go:318] 	--control-plane 
	I1018 12:19:15.907173  326490 kubeadm.go:318] 
	I1018 12:19:15.907323  326490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:19:15.907337  326490 kubeadm.go:318] 
	I1018 12:19:15.907436  326490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pmkr01.67na6m3iuf7b6wke \
	I1018 12:19:15.907606  326490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4cbf75768df6c8067a68cd6b508a8fe660e400590ab42f5d809bc424c0e78a6d 
	I1018 12:19:15.907623  326490 cni.go:84] Creating CNI manager for ""
	I1018 12:19:15.907632  326490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:15.857063  319485 pod_ready.go:94] pod "kube-scheduler-embed-certs-175371" is "Ready"
	I1018 12:19:15.857091  319485 pod_ready.go:86] duration metric: took 400.110605ms for pod "kube-scheduler-embed-certs-175371" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:19:15.857103  319485 pod_ready.go:40] duration metric: took 32.907623738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:19:15.908233  319485 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:15.909420  326490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:19:15.910368  319485 out.go:179] * Done! kubectl is now configured to use "embed-certs-175371" cluster and "default" namespace by default
	I1018 12:19:15.911428  326490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:19:15.916203  326490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:19:15.916223  326490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:19:15.930716  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:19:16.186811  326490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:19:16.186877  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.186927  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-579606 minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=newest-cni-579606 minikube.k8s.io/primary=true
	I1018 12:19:16.200483  326490 ops.go:34] apiserver oom_adj: -16
	I1018 12:19:16.289962  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:16.790297  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.290815  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:17.790675  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.290971  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:18.791051  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.291007  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:19.790041  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.290948  326490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:19:20.364194  326490 kubeadm.go:1113] duration metric: took 4.177366872s to wait for elevateKubeSystemPrivileges
	I1018 12:19:20.364236  326490 kubeadm.go:402] duration metric: took 15.569226889s to StartCluster
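	(Editor's note) The burst of `kubectl get sa default` calls above, spaced roughly 500ms apart, is a plain retry loop: kubeadm has finished, but the default ServiceAccount only exists once the controller-manager catches up. A sketch of the same poll (binary and kubeconfig paths from the log; the 2-minute deadline is an assumption):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout, not minikube's actual value
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default ServiceAccount exists")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the spacing seen in the log
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for default ServiceAccount")
		os.Exit(1)
	}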
	I1018 12:19:20.364257  326490 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.364341  326490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:20.366539  326490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:20.366808  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:19:20.366823  326490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:19:20.366886  326490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:19:20.366978  326490 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-579606"
	I1018 12:19:20.366998  326490 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-579606"
	I1018 12:19:20.367029  326490 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.367046  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.367047  326490 addons.go:69] Setting default-storageclass=true in profile "newest-cni-579606"
	I1018 12:19:20.367088  326490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-579606"
	I1018 12:19:20.367465  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.367552  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.368575  326490 out.go:179] * Verifying Kubernetes components...
	I1018 12:19:20.370326  326490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:20.394477  326490 addons.go:238] Setting addon default-storageclass=true in "newest-cni-579606"
	I1018 12:19:20.394522  326490 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:20.394869  326490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:19:20.395017  326490 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:20.396676  326490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.396702  326490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:19:20.396772  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423305  326490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.423405  326490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:19:20.423499  326490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:20.423817  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.453744  326490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:20.465106  326490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:19:20.532388  326490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:20.546306  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:20.568683  326490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:20.669063  326490 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
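	(Editor's note) The sed pipeline a few lines up injects a hosts block mapping host.minikube.internal to the gateway IP into the coredns Corefile. The same edit expressed with client-go rather than sed, as a sketch (namespace and ConfigMap name are real; the string surgery is simplified relative to the full sed expression, which also adds a log directive):

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		hosts := "        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }\n"
		// Insert the hosts block just before the forward stanza, like the sed expression does.
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}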
	I1018 12:19:20.670556  326490 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:19:20.670609  326490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:19:20.899558  326490 api_server.go:72] duration metric: took 532.701277ms to wait for apiserver process to appear ...
	I1018 12:19:20.899596  326490 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:19:20.899623  326490 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:20.906703  326490 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:19:20.907612  326490 api_server.go:141] control plane version: v1.34.1
	I1018 12:19:20.907641  326490 api_server.go:131] duration metric: took 8.037799ms to wait for apiserver health ...
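	(Editor's note) The healthz step above is an HTTPS GET against the apiserver that expects a 200 with body "ok". A standard-library sketch (skipping TLS verification here stands in for the cluster-CA wiring the real client does):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: skip verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 "ok"
	}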
	I1018 12:19:20.907652  326490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:19:20.909941  326490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:19:20.911175  326490 addons.go:514] duration metric: took 544.288646ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:19:20.911194  326490 system_pods.go:59] 8 kube-system pods found
	I1018 12:19:20.911217  326490 system_pods.go:61] "coredns-66bc5c9577-p6bts" [49609244-6dc2-4950-8fad-8240b827ecca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911224  326490 system_pods.go:61] "etcd-newest-cni-579606" [496c00b4-7ad1-40c0-a440-c396a752cbf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:19:20.911231  326490 system_pods.go:61] "kindnet-2c4t6" [08c0018d-0f0f-435e-8868-31818d5639fa] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:19:20.911238  326490 system_pods.go:61] "kube-apiserver-newest-cni-579606" [a39961c7-019e-41ec-8843-e98e9c2e3604] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:19:20.911249  326490 system_pods.go:61] "kube-controller-manager-newest-cni-579606" [992bd82d-6489-43da-83ba-8dcb6b86fe48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:19:20.911262  326490 system_pods.go:61] "kube-proxy-5hjgn" [915df613-23ce-49e2-b125-d223024077b0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:19:20.911291  326490 system_pods.go:61] "kube-scheduler-newest-cni-579606" [2a1de39e-4fa6-49e8-a420-75a6c82ac73e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:19:20.911306  326490 system_pods.go:61] "storage-provisioner" [c7ff4c04-56e5-469b-9af2-dc1bf4fe969d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:20.911314  326490 system_pods.go:74] duration metric: took 3.655766ms to wait for pod list to return data ...
	I1018 12:19:20.911324  326490 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:19:20.913681  326490 default_sa.go:45] found service account: "default"
	I1018 12:19:20.913702  326490 default_sa.go:55] duration metric: took 2.371901ms for default service account to be created ...
	I1018 12:19:20.913712  326490 kubeadm.go:586] duration metric: took 546.861004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:20.913730  326490 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:19:20.916084  326490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:19:20.916105  326490 node_conditions.go:123] node cpu capacity is 8
	I1018 12:19:20.916117  326490 node_conditions.go:105] duration metric: took 2.382506ms to run NodePressure ...
	I1018 12:19:20.916128  326490 start.go:241] waiting for startup goroutines ...
	I1018 12:19:21.173827  326490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-579606" context rescaled to 1 replicas
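	(Editor's note) The kapi.go rescale above drops the coredns Deployment from kubeadm's default of two replicas to one, which is all a single-node cluster needs. With client-go that is a Scale subresource round-trip, sketched here:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Read the current Scale, set the desired replica count, write it back.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}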
	I1018 12:19:21.173870  326490 start.go:246] waiting for cluster config update ...
	I1018 12:19:21.173882  326490 start.go:255] writing updated cluster config ...
	I1018 12:19:21.174193  326490 ssh_runner.go:195] Run: rm -f paused
	I1018 12:19:21.223166  326490 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:21.225317  326490 out.go:179] * Done! kubectl is now configured to use "newest-cni-579606" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 12:18:52 embed-certs-175371 crio[563]: time="2025-10-18T12:18:52.83581025Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 12:18:52 embed-certs-175371 crio[563]: time="2025-10-18T12:18:52.841064206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 12:18:52 embed-certs-175371 crio[563]: time="2025-10-18T12:18:52.841099677Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.971527464Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=26476176-3a62-42b3-8229-a6220e267d02 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.972370076Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67eafd4c-6e74-455c-90d5-489c3fe4e746 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.973383703Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper" id=a79e4de2-7321-4913-a72e-839ca1577dc7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.9736505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.979524297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:08 embed-certs-175371 crio[563]: time="2025-10-18T12:19:08.9801566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.015903294Z" level=info msg="Created container a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper" id=a79e4de2-7321-4913-a72e-839ca1577dc7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.016502614Z" level=info msg="Starting container: a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889" id=b13f7d98-e8c1-4727-ac49-75fdf3732d8b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.018465646Z" level=info msg="Started container" PID=1757 containerID=a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper id=b13f7d98-e8c1-4727-ac49-75fdf3732d8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ff71eac7916d9257d2f13c089cac003c220048e18ea9eef187c68409dc9a69a
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.089271029Z" level=info msg="Removing container: 9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb" id=0f7015a0-a0ea-458b-bde0-9cd97bc7ccf0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:19:09 embed-certs-175371 crio[563]: time="2025-10-18T12:19:09.099916687Z" level=info msg="Removed container 9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp/dashboard-metrics-scraper" id=0f7015a0-a0ea-458b-bde0-9cd97bc7ccf0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.096358873Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9ee81e05-cf9f-42f4-9214-9731df8c46c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.09736587Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f92fe0a1-9104-47d1-9429-9b6131cfdedc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.098470276Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a28a4310-d4d2-45da-834c-caa96eca0d52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.098740271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103222352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103423555Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/144b12b37946c45001c97e144b72befff90afcada575307e35051e2228472cee/merged/etc/passwd: no such file or directory"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103461831Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/144b12b37946c45001c97e144b72befff90afcada575307e35051e2228472cee/merged/etc/group: no such file or directory"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.103740363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.129124977Z" level=info msg="Created container 5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59: kube-system/storage-provisioner/storage-provisioner" id=a28a4310-d4d2-45da-834c-caa96eca0d52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.129813492Z" level=info msg="Starting container: 5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59" id=9bca1842-7053-4219-9a80-b77fa0488ab5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:13 embed-certs-175371 crio[563]: time="2025-10-18T12:19:13.13182002Z" level=info msg="Started container" PID=1771 containerID=5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59 description=kube-system/storage-provisioner/storage-provisioner id=9bca1842-7053-4219-9a80-b77fa0488ab5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18feedd3d7c26e7a2eff27f48d91e337915e0f785c90e299345c24a3ea528fed
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5617debabda54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   18feedd3d7c26       storage-provisioner                          kube-system
	a405ad4e1a98a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   2ff71eac7916d       dashboard-metrics-scraper-6ffb444bf9-24czp   kubernetes-dashboard
	cb1a3164b004d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   eb7ea3ab23330       kubernetes-dashboard-855c9754f9-z4wqj        kubernetes-dashboard
	81b540825c9eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   cb308e2134534       busybox                                      default
	f6306f9162a1d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   09269391a70af       coredns-66bc5c9577-b6h9l                     kube-system
	4fc9ce5175d37       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   d825774c10f73       kube-proxy-t2x4c                             kube-system
	36a5bde68e89d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   4ac436233cd3e       kindnet-dxw8r                                kube-system
	ef18b0bcad14e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   18feedd3d7c26       storage-provisioner                          kube-system
	7eed71db702f7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   1dca7b19b01ff       etcd-embed-certs-175371                      kube-system
	8b43d4c98eba6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   42a4e0109b4ba       kube-apiserver-embed-certs-175371            kube-system
	d82c539cae499       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   be01ebffb564c       kube-scheduler-embed-certs-175371            kube-system
	a474582c739fe       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   3e5898b103599       kube-controller-manager-embed-certs-175371   kube-system
	
	
	==> coredns [f6306f9162a1d28042bad4e6da438c5462874638b4d0624b07e6465f0c518b7e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54496 - 19579 "HINFO IN 390884335358352546.2896067784334696330. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.030583319s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-175371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-175371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=embed-certs-175371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_17_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:17:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-175371
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:19:12 +0000   Sat, 18 Oct 2025 12:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-175371
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d2c06e1f-4c4f-4264-8151-34f2c71eddce
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-b6h9l                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m14s
	  kube-system                 etcd-embed-certs-175371                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-dxw8r                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-embed-certs-175371             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-embed-certs-175371    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-t2x4c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-embed-certs-175371             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-24czp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z4wqj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 49s                    kube-proxy       
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m25s (x8 over 2m25s)  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s (x8 over 2m25s)  kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s (x8 over 2m25s)  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m20s                  kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m16s                  node-controller  Node embed-certs-175371 event: Registered Node embed-certs-175371 in Controller
	  Normal  NodeReady                94s                    kubelet          Node embed-certs-175371 status is now: NodeReady
	  Normal  Starting                 54s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 54s)      kubelet          Node embed-certs-175371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 54s)      kubelet          Node embed-certs-175371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 54s)      kubelet          Node embed-certs-175371 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node embed-certs-175371 event: Registered Node embed-certs-175371 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [7eed71db702f71ba8ac1b3a4f95bf0e94d637c0237e59764412e0610aff6eddd] <==
	{"level":"warn","ts":"2025-10-18T12:18:40.722571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.729260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.735578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.745131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.752729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.759099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.766088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.783955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.792718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.800080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.806892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.814308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.821756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.828334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.835429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.842239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.856900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.865140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.880650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.886959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.894332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.911319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.918001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.924553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:40.970182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42696","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:19:32 up  1:01,  0 user,  load average: 3.11, 3.83, 2.60
	Linux embed-certs-175371 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36a5bde68e89db4b5596d0782075e0d814c39bdb4c4812f2188ab8957137475e] <==
	I1018 12:18:42.516931       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:18:42.517687       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 12:18:42.517913       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:18:42.517936       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:18:42.517959       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:18:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:18:42.812796       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:18:42.813697       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:18:42.813721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:18:42.813898       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:18:43.114172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:18:43.114195       1 metrics.go:72] Registering metrics
	I1018 12:18:43.114242       1 controller.go:711] "Syncing nftables rules"
	I1018 12:18:52.813029       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:18:52.813085       1 main.go:301] handling current node
	I1018 12:19:02.816855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:19:02.816885       1 main.go:301] handling current node
	I1018 12:19:12.811954       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:19:12.811991       1 main.go:301] handling current node
	I1018 12:19:22.818875       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 12:19:22.818920       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8b43d4c98eba66467fa5b9aa2bd7f75a53d098d4dc11c9ca9578904769346b5e] <==
	I1018 12:18:41.451393       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 12:18:41.451401       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 12:18:41.451439       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:18:41.451448       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:18:41.451454       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:18:41.451460       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:18:41.451544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 12:18:41.451678       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:18:41.454439       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:18:41.457470       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:18:41.470571       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 12:18:41.482010       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:18:41.493107       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:18:41.530311       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:18:41.703722       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:18:41.735780       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:18:41.758441       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:18:41.767620       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:18:41.777682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:18:41.813438       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.94.86"}
	I1018 12:18:41.826162       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.211.155"}
	I1018 12:18:42.358197       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:18:45.136249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:18:45.231410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:18:45.383497       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a474582c739fed0fe5717b996a3fc2e3a1f0f913711f6e7f996ecc56104a314f] <==
	I1018 12:18:44.757405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:18:44.757487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:18:44.758091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:18:44.779663       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 12:18:44.779686       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:18:44.779675       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:18:44.779861       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:18:44.779916       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:18:44.780912       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:18:44.780937       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:18:44.781001       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:18:44.781558       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 12:18:44.782815       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:18:44.784183       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:18:44.784327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:18:44.786362       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:18:44.786404       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:18:44.786433       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:18:44.786487       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:18:44.786493       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:18:44.786498       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:18:44.788081       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:18:44.790324       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:18:44.792597       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:18:44.802922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4fc9ce5175d3764f8e0fb91e099e901a2302dfd2ff50d4abfb0a9edeb71386f9] <==
	I1018 12:18:42.376048       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:18:42.438173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:18:42.538657       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:18:42.538710       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 12:18:42.538808       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:18:42.561745       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:18:42.561820       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:18:42.568657       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:18:42.569231       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:18:42.569254       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:42.570622       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:18:42.570650       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:18:42.570684       1 config.go:200] "Starting service config controller"
	I1018 12:18:42.570729       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:18:42.570728       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:18:42.570745       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:18:42.570989       1 config.go:309] "Starting node config controller"
	I1018 12:18:42.571003       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:18:42.671520       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:18:42.671555       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:18:42.671529       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:18:42.671582       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [d82c539cae49915538e61bf60b7ade17e61db3edc660d10570b58552a6175d40] <==
	I1018 12:18:41.414640       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:18:41.414679       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:41.418106       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:41.418145       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:18:41.418233       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:18:41.418381       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:18:41.431162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 12:18:41.434890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:18:41.435055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:18:41.436145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:41.436254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:18:41.436367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:18:41.436447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:18:41.437128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:18:41.436582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:41.436642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:18:41.436811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:41.436985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:41.437056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:18:41.436520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:41.437217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:18:41.437441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:18:41.437550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:18:41.438047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1018 12:18:42.319397       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:18:45 embed-certs-175371 kubelet[723]: I1018 12:18:45.301508     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9nh\" (UniqueName: \"kubernetes.io/projected/a954deab-5a8a-4354-9e53-7ac4c92d040f-kube-api-access-4z9nh\") pod \"dashboard-metrics-scraper-6ffb444bf9-24czp\" (UID: \"a954deab-5a8a-4354-9e53-7ac4c92d040f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp"
	Oct 18 12:18:45 embed-certs-175371 kubelet[723]: I1018 12:18:45.301526     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a954deab-5a8a-4354-9e53-7ac4c92d040f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-24czp\" (UID: \"a954deab-5a8a-4354-9e53-7ac4c92d040f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp"
	Oct 18 12:18:45 embed-certs-175371 kubelet[723]: I1018 12:18:45.301547     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9162a212-7249-4ae3-a9ee-877a66ae4adf-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-z4wqj\" (UID: \"9162a212-7249-4ae3-a9ee-877a66ae4adf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z4wqj"
	Oct 18 12:18:49 embed-certs-175371 kubelet[723]: I1018 12:18:49.023695     723 scope.go:117] "RemoveContainer" containerID="e2d68f39dd5ab27c50cfd823b70df7f3b6aed834bd32c61c6da1199a2135cc4c"
	Oct 18 12:18:50 embed-certs-175371 kubelet[723]: I1018 12:18:50.029603     723 scope.go:117] "RemoveContainer" containerID="e2d68f39dd5ab27c50cfd823b70df7f3b6aed834bd32c61c6da1199a2135cc4c"
	Oct 18 12:18:50 embed-certs-175371 kubelet[723]: I1018 12:18:50.030701     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:18:50 embed-certs-175371 kubelet[723]: E1018 12:18:50.031376     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:18:51 embed-certs-175371 kubelet[723]: I1018 12:18:51.032436     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:18:51 embed-certs-175371 kubelet[723]: E1018 12:18:51.032609     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:18:54 embed-certs-175371 kubelet[723]: I1018 12:18:54.559918     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z4wqj" podStartSLOduration=3.816114155 podStartE2EDuration="9.559890666s" podCreationTimestamp="2025-10-18 12:18:45 +0000 UTC" firstStartedPulling="2025-10-18 12:18:45.535653359 +0000 UTC m=+6.657332094" lastFinishedPulling="2025-10-18 12:18:51.279429856 +0000 UTC m=+12.401108605" observedRunningTime="2025-10-18 12:18:52.046564184 +0000 UTC m=+13.168242958" watchObservedRunningTime="2025-10-18 12:18:54.559890666 +0000 UTC m=+15.681569422"
	Oct 18 12:18:55 embed-certs-175371 kubelet[723]: I1018 12:18:55.088342     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:18:55 embed-certs-175371 kubelet[723]: E1018 12:18:55.088570     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:08 embed-certs-175371 kubelet[723]: I1018 12:19:08.971136     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:19:09 embed-certs-175371 kubelet[723]: I1018 12:19:09.083607     723 scope.go:117] "RemoveContainer" containerID="9f9b17ff004c953db0bb0dbb859d0cc12c3e095d59cd5ee238a91807668dc4bb"
	Oct 18 12:19:09 embed-certs-175371 kubelet[723]: I1018 12:19:09.083974     723 scope.go:117] "RemoveContainer" containerID="a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	Oct 18 12:19:09 embed-certs-175371 kubelet[723]: E1018 12:19:09.084344     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:13 embed-certs-175371 kubelet[723]: I1018 12:19:13.095872     723 scope.go:117] "RemoveContainer" containerID="ef18b0bcad14e848b1c27658083f65d022651b906dddfc0ef264638b57310d83"
	Oct 18 12:19:15 embed-certs-175371 kubelet[723]: I1018 12:19:15.089282     723 scope.go:117] "RemoveContainer" containerID="a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	Oct 18 12:19:15 embed-certs-175371 kubelet[723]: E1018 12:19:15.089504     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:26 embed-certs-175371 kubelet[723]: I1018 12:19:26.970952     723 scope.go:117] "RemoveContainer" containerID="a405ad4e1a98a18fc499624c47306f6d1cc7a55bbfa44133264e1b27d5551889"
	Oct 18 12:19:26 embed-certs-175371 kubelet[723]: E1018 12:19:26.971196     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-24czp_kubernetes-dashboard(a954deab-5a8a-4354-9e53-7ac4c92d040f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-24czp" podUID="a954deab-5a8a-4354-9e53-7ac4c92d040f"
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 12:19:28 embed-certs-175371 systemd[1]: kubelet.service: Consumed 1.653s CPU time.
	
	
	==> kubernetes-dashboard [cb1a3164b004db279fa65be1382cd2de2087a29d8a9572c7d9390b8435ece780] <==
	2025/10/18 12:18:51 Starting overwatch
	2025/10/18 12:18:51 Using namespace: kubernetes-dashboard
	2025/10/18 12:18:51 Using in-cluster config to connect to apiserver
	2025/10/18 12:18:51 Using secret token for csrf signing
	2025/10/18 12:18:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:18:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:18:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:18:51 Generating JWE encryption key
	2025/10/18 12:18:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:18:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:18:51 Initializing JWE encryption key from synchronized object
	2025/10/18 12:18:51 Creating in-cluster Sidecar client
	2025/10/18 12:18:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:18:51 Serving insecurely on HTTP port: 9090
	2025/10/18 12:19:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5617debabda54b03bff0f372472919af6a9bb3bbcbc514242b26a2064697ae59] <==
	I1018 12:19:13.144449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:19:13.153615       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:19:13.153676       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:19:13.155935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:16.610476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:20.874272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:24.473048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:27.526882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:30.548943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:30.553781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:30.553974       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:19:30.554115       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5075b3f2-7e93-4c37-98dd-c9faa2e4aa50", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-175371_9e8dd8a0-c67c-4765-8889-3b4c8f207b6f became leader
	I1018 12:19:30.554161       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-175371_9e8dd8a0-c67c-4765-8889-3b4c8f207b6f!
	W1018 12:19:30.555966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:30.558837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:19:30.655189       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-175371_9e8dd8a0-c67c-4765-8889-3b4c8f207b6f!
	
	
	==> storage-provisioner [ef18b0bcad14e848b1c27658083f65d022651b906dddfc0ef264638b57310d83] <==
	I1018 12:18:42.335970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:19:12.338133       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
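The repeated "dial tcp 10.96.0.1:443: i/o timeout" failures in the coredns and storage-provisioner logs above all point at the kubernetes Service VIP being unreachable from the pod network during a window after the node restart; the replacement storage-provisioner instance acquired its lease only once that path came back. A minimal in-cluster spot-check of the same path (a sketch, not part of this run; it assumes a pullable busybox image and that the pod name "netcheck" is free):

	# Throwaway pod that attempts a TCP connect to the Service VIP from the pod network;
	# "reachable" is printed only if the connect succeeds within 5 seconds.
	kubectl --context embed-certs-175371 run netcheck --rm -i --restart=Never \
	  --image=busybox:1.36 -- sh -c 'echo | nc -w 5 10.96.0.1 443 && echo reachable'

If this times out as well, the fault is in the VIP data path (kube-proxy rules / kindnet) rather than in the individual components that logged the errors.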
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175371 -n embed-certs-175371
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-175371 -n embed-certs-175371: exit status 2 (312.02493ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-175371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.34s)
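The Pause failures in this group appear to share one root cause: minikube's pause path enumerates containers with "sudo runc list -f json", and on these cri-o nodes that command fails with "open /run/runc: no such file or directory" (visible in the newest-cni trace below). A quick confirmation from the node itself, assuming the profile is still running (a hypothetical check, not executed in this run):

	# Does the runc state directory exist, and does CRI-O still see the containers?
	out/minikube-linux-amd64 -p newest-cni-579606 ssh -- "ls -ld /run/runc; sudo crictl ps -a | head"

If /run/runc is absent while crictl still lists the kube-system containers, the pause logic is consulting the wrong runtime state root for this cri-o configuration rather than finding genuinely stopped containers.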

x
+
TestStartStop/group/newest-cni/serial/Pause (5.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-579606 --alsologtostderr -v=1
E1018 12:19:48.252426    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-579606 --alsologtostderr -v=1: exit status 80 (2.280379576s)

-- stdout --
	* Pausing node newest-cni-579606 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 12:19:47.049118  337066 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:47.049389  337066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:47.049398  337066 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:47.049402  337066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:47.049592  337066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:19:47.049871  337066 out.go:368] Setting JSON to false
	I1018 12:19:47.049909  337066 mustload.go:65] Loading cluster: newest-cni-579606
	I1018 12:19:47.050273  337066 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:47.050661  337066 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:47.069728  337066 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:47.070029  337066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:19:47.128318  337066 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-18 12:19:47.118271548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:19:47.129062  337066 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-579606 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 12:19:47.131520  337066 out.go:179] * Pausing node newest-cni-579606 ... 
	I1018 12:19:47.133002  337066 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:47.133287  337066 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:47.133323  337066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:47.151605  337066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:47.247335  337066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:47.261584  337066 pause.go:52] kubelet running: true
	I1018 12:19:47.261644  337066 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:47.398451  337066 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:47.398567  337066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:47.467820  337066 cri.go:89] found id: "f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06"
	I1018 12:19:47.467848  337066 cri.go:89] found id: "b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967"
	I1018 12:19:47.467854  337066 cri.go:89] found id: "53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21"
	I1018 12:19:47.467859  337066 cri.go:89] found id: "65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9"
	I1018 12:19:47.467862  337066 cri.go:89] found id: "3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822"
	I1018 12:19:47.467871  337066 cri.go:89] found id: "a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287"
	I1018 12:19:47.467874  337066 cri.go:89] found id: ""
	I1018 12:19:47.467927  337066 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:47.479818  337066 retry.go:31] will retry after 175.634354ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:47Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:47.656288  337066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:47.669572  337066 pause.go:52] kubelet running: false
	I1018 12:19:47.669624  337066 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:47.781978  337066 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:47.782071  337066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:47.851354  337066 cri.go:89] found id: "f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06"
	I1018 12:19:47.851380  337066 cri.go:89] found id: "b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967"
	I1018 12:19:47.851385  337066 cri.go:89] found id: "53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21"
	I1018 12:19:47.851389  337066 cri.go:89] found id: "65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9"
	I1018 12:19:47.851393  337066 cri.go:89] found id: "3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822"
	I1018 12:19:47.851397  337066 cri.go:89] found id: "a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287"
	I1018 12:19:47.851401  337066 cri.go:89] found id: ""
	I1018 12:19:47.851446  337066 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:47.863300  337066 retry.go:31] will retry after 338.769005ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:47Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:48.202894  337066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:48.216027  337066 pause.go:52] kubelet running: false
	I1018 12:19:48.216108  337066 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:48.329219  337066 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:48.329304  337066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:48.397073  337066 cri.go:89] found id: "f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06"
	I1018 12:19:48.397098  337066 cri.go:89] found id: "b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967"
	I1018 12:19:48.397103  337066 cri.go:89] found id: "53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21"
	I1018 12:19:48.397106  337066 cri.go:89] found id: "65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9"
	I1018 12:19:48.397109  337066 cri.go:89] found id: "3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822"
	I1018 12:19:48.397112  337066 cri.go:89] found id: "a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287"
	I1018 12:19:48.397115  337066 cri.go:89] found id: ""
	I1018 12:19:48.397159  337066 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:48.409440  337066 retry.go:31] will retry after 654.277597ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:49.064303  337066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:19:49.077514  337066 pause.go:52] kubelet running: false
	I1018 12:19:49.077575  337066 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 12:19:49.189687  337066 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 12:19:49.189787  337066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 12:19:49.256646  337066 cri.go:89] found id: "f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06"
	I1018 12:19:49.256666  337066 cri.go:89] found id: "b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967"
	I1018 12:19:49.256670  337066 cri.go:89] found id: "53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21"
	I1018 12:19:49.256673  337066 cri.go:89] found id: "65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9"
	I1018 12:19:49.256675  337066 cri.go:89] found id: "3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822"
	I1018 12:19:49.256678  337066 cri.go:89] found id: "a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287"
	I1018 12:19:49.256680  337066 cri.go:89] found id: ""
	I1018 12:19:49.256714  337066 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:49.271193  337066 out.go:203] 
	W1018 12:19:49.272632  337066 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:49.272654  337066 out.go:285] * 
	W1018 12:19:49.276900  337066 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:49.278130  337066 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-579606 --alsologtostderr -v=1 failed: exit status 80
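Every pause attempt above dies at the same call: `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, and minikube re-runs the listing with a growing backoff (retry.go:31) before surfacing GUEST_PAUSE. A minimal Go sketch of that retry shape, assuming only the command, the roughly 300ms initial delay, and the three attempts visible in the log (this is not minikube's actual retry.go):

	// Sketch only: approximates the backoff loop logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		backoff := 300 * time.Millisecond
		for attempt := 1; attempt <= 3; attempt++ {
			// On this node this fails because /run/runc is absent:
			//   open /run/runc: no such file or directory
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			fmt.Printf("attempt %d: %v (%s); retrying in %s\n", attempt, err, out, backoff)
			time.Sleep(backoff)
			backoff *= 2 // grow the delay between attempts
		}
		fmt.Println("giving up: list running: runc list failed (GUEST_PAUSE)")
	}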
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-579606
helpers_test.go:243: (dbg) docker inspect newest-cni-579606:

-- stdout --
	[
	    {
	        "Id": "641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681",
	        "Created": "2025-10-18T12:19:00.208907647Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:19:36.371587265Z",
	            "FinishedAt": "2025-10-18T12:19:35.392745108Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/hostname",
	        "HostsPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/hosts",
	        "LogPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681-json.log",
	        "Name": "/newest-cni-579606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-579606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-579606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681",
	                "LowerDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-579606",
	                "Source": "/var/lib/docker/volumes/newest-cni-579606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-579606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-579606",
	                "name.minikube.sigs.k8s.io": "newest-cni-579606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e63077d99c6156c180490b2446125b6c6bde4bf1b53a8574295f05935690fce",
	            "SandboxKey": "/var/run/docker/netns/1e63077d99c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-579606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:c5:38:18:07:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f1c73ac1e12d550471cb62895be2add81ac8cf17de04960f0eacccc32c8d7ed",
	                    "EndpointID": "8a838023b9728c6ddb19ab298ea04b08bbc92e5f9a6d0fd03458d2e7e897eeff",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-579606",
	                        "641d4379c21a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
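The dump above is the helper's full snapshot; when only a field or two matters, the same data can be read with a format template. A small sketch, assuming the container name from this run; the `22/tcp` port template is the same one cli_runner runs later in these logs:

	// Sketch: pull just the container state and published SSH port
	// instead of the full inspect JSON.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "newest-cni-579606").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("%s", out) // for the state above: "running 33133"
	}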
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606: exit status 2 (313.160436ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
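Exit status 2 from `status --format={{.Host}}` is expected at this point: the kic container is still running while kubelet and the control plane are down, so the helper records the output and continues. A hedged sketch of that tolerant probe, reusing the command from the lines above (the exit-code handling is an illustration, not the helper's actual code):

	// Sketch: exit status 2 still carries per-component text on stdout,
	// so record it rather than treating it as fatal.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "newest-cni-579606", "-n", "newest-cni-579606")
		out, err := cmd.Output() // stdout is populated even on non-zero exit
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("%s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 2:
			fmt.Printf("exit 2 (may be ok): %s", out) // here: "Running"
		default:
			fmt.Println("status failed:", err)
		}
	}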
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-579606 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ default-k8s-diff-port-028309 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p default-k8s-diff-port-028309 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ stop    │ -p newest-cni-579606 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ embed-certs-175371 image list --format=json                                                                                                                                                                                                   │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p embed-certs-175371 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p embed-certs-175371                                                                                                                                                                                                                         │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p embed-certs-175371                                                                                                                                                                                                                         │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-579606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ newest-cni-579606 image list --format=json                                                                                                                                                                                                    │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p newest-cni-579606 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:19:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:19:36.137368  335075 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:36.137645  335075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:36.137657  335075 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:36.137664  335075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:36.137888  335075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:19:36.138388  335075 out.go:368] Setting JSON to false
	I1018 12:19:36.139434  335075 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3724,"bootTime":1760786252,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:19:36.139534  335075 start.go:141] virtualization: kvm guest
	I1018 12:19:36.141714  335075 out.go:179] * [newest-cni-579606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:19:36.143243  335075 notify.go:220] Checking for updates...
	I1018 12:19:36.143289  335075 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:19:36.144910  335075 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:19:36.146574  335075 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:36.148070  335075 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:19:36.149395  335075 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:19:36.150771  335075 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:19:36.152502  335075 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:36.152934  335075 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:19:36.176992  335075 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:19:36.177143  335075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:19:36.233999  335075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-18 12:19:36.222342082 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:19:36.234144  335075 docker.go:318] overlay module found
	I1018 12:19:36.236207  335075 out.go:179] * Using the docker driver based on existing profile
	I1018 12:19:36.237645  335075 start.go:305] selected driver: docker
	I1018 12:19:36.237662  335075 start.go:925] validating driver "docker" against &{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:36.237783  335075 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:19:36.238367  335075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:19:36.294808  335075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-18 12:19:36.284719824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:19:36.295164  335075 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:36.295194  335075 cni.go:84] Creating CNI manager for ""
	I1018 12:19:36.295252  335075 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:36.295299  335075 start.go:349] cluster config:
	{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:36.297532  335075 out.go:179] * Starting "newest-cni-579606" primary control-plane node in "newest-cni-579606" cluster
	I1018 12:19:36.299258  335075 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:19:36.300692  335075 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:19:36.301848  335075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:36.301893  335075 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:19:36.301895  335075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:19:36.301906  335075 cache.go:58] Caching tarball of preloaded images
	I1018 12:19:36.302098  335075 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:19:36.302112  335075 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:19:36.302204  335075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:36.324652  335075 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:19:36.324678  335075 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:19:36.324701  335075 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:19:36.324743  335075 start.go:360] acquireMachinesLock for newest-cni-579606: {Name:mk4161cf0bf2eb93a8110dc388332ec9ca8fc5ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:19:36.324830  335075 start.go:364] duration metric: took 51.443µs to acquireMachinesLock for "newest-cni-579606"
	I1018 12:19:36.324854  335075 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:19:36.324864  335075 fix.go:54] fixHost starting: 
	I1018 12:19:36.325094  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:36.342982  335075 fix.go:112] recreateIfNeeded on newest-cni-579606: state=Stopped err=<nil>
	W1018 12:19:36.343024  335075 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:19:36.345208  335075 out.go:252] * Restarting existing docker container for "newest-cni-579606" ...
	I1018 12:19:36.345312  335075 cli_runner.go:164] Run: docker start newest-cni-579606
	I1018 12:19:36.594314  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:36.613801  335075 kic.go:430] container "newest-cni-579606" state is running.
	I1018 12:19:36.614215  335075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:36.633841  335075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:36.634099  335075 machine.go:93] provisionDockerMachine start ...
	I1018 12:19:36.634191  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:36.654222  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:36.654471  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:36.654487  335075 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:19:36.655110  335075 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53330->127.0.0.1:33133: read: connection reset by peer
	I1018 12:19:39.790204  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:39.790236  335075 ubuntu.go:182] provisioning hostname "newest-cni-579606"
	I1018 12:19:39.790300  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:39.809358  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:39.809574  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:39.809591  335075 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-579606 && echo "newest-cni-579606" | sudo tee /etc/hostname
	I1018 12:19:39.952255  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:39.952342  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:39.970495  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:39.970743  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:39.970776  335075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-579606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-579606/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-579606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:19:40.103918  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:19:40.103950  335075 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:19:40.104005  335075 ubuntu.go:190] setting up certificates
	I1018 12:19:40.104022  335075 provision.go:84] configureAuth start
	I1018 12:19:40.104077  335075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:40.123311  335075 provision.go:143] copyHostCerts
	I1018 12:19:40.123388  335075 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:19:40.123413  335075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:19:40.123496  335075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:19:40.123747  335075 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:19:40.123785  335075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:19:40.123842  335075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:19:40.123952  335075 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:19:40.123965  335075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:19:40.124031  335075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:19:40.124134  335075 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.newest-cni-579606 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-579606]
	I1018 12:19:40.379660  335075 provision.go:177] copyRemoteCerts
	I1018 12:19:40.379724  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:19:40.379768  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.398109  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:40.497321  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:19:40.515000  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:19:40.532198  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:19:40.549409  335075 provision.go:87] duration metric: took 445.372225ms to configureAuth
	I1018 12:19:40.549443  335075 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:19:40.549604  335075 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:40.549688  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.568011  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:40.568277  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:40.568294  335075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:19:40.831510  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:19:40.831535  335075 machine.go:96] duration metric: took 4.197417627s to provisionDockerMachine
	I1018 12:19:40.831547  335075 start.go:293] postStartSetup for "newest-cni-579606" (driver="docker")
	I1018 12:19:40.831560  335075 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:19:40.831617  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:19:40.831684  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.850007  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:40.946361  335075 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:19:40.949946  335075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:19:40.949977  335075 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:19:40.949988  335075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:19:40.950043  335075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:19:40.950123  335075 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:19:40.950219  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:19:40.957723  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:40.974965  335075 start.go:296] duration metric: took 143.401884ms for postStartSetup
	I1018 12:19:40.975058  335075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:19:40.975103  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.993512  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:41.087262  335075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:19:41.092104  335075 fix.go:56] duration metric: took 4.767233113s for fixHost
	I1018 12:19:41.092134  335075 start.go:83] releasing machines lock for "newest-cni-579606", held for 4.767291003s
	I1018 12:19:41.092204  335075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:41.110754  335075 ssh_runner.go:195] Run: cat /version.json
	I1018 12:19:41.110818  335075 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:19:41.110835  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:41.110915  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:41.130109  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:41.130319  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:41.277617  335075 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:41.284424  335075 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:19:41.321160  335075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:19:41.326237  335075 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:19:41.326321  335075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:19:41.335085  335075 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:19:41.335111  335075 start.go:495] detecting cgroup driver to use...
	I1018 12:19:41.335142  335075 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:19:41.335189  335075 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:19:41.350564  335075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:19:41.363255  335075 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:19:41.363325  335075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:19:41.378641  335075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:19:41.391318  335075 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:19:41.472724  335075 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:19:41.553719  335075 docker.go:234] disabling docker service ...
	I1018 12:19:41.553812  335075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:19:41.567833  335075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:19:41.579981  335075 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:19:41.660366  335075 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:19:41.737906  335075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:19:41.751046  335075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:19:41.766637  335075 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:19:41.766704  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.775840  335075 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:19:41.775908  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.784549  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.793137  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.802070  335075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:19:41.810220  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.819325  335075 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.827701  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
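The sed edits above configure cri-o in place rather than templating a fresh file. Pieced together from the substitutions (TOML table headers such as [crio.runtime] omitted), /etc/crio/crio.conf.d/02-crio.conf ends up with roughly:

	pause_image = "registry.k8s.io/pause:3.10.1"   # infra (sandbox) container image
	cgroup_manager = "systemd"                     # must match the kubelet's cgroupDriver
	conmon_cgroup = "pod"                          # required form when cgroup_manager is systemd
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",     # let pods bind ports < 1024 unprivileged
	]
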
	I1018 12:19:41.836535  335075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:19:41.844196  335075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:19:41.851604  335075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:41.931321  335075 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:19:42.037855  335075 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:19:42.037929  335075 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:19:42.041913  335075 start.go:563] Will wait 60s for crictl version
	I1018 12:19:42.041961  335075 ssh_runner.go:195] Run: which crictl
	I1018 12:19:42.045709  335075 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:19:42.071835  335075 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
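The version probe above goes through the CRI API rather than the crio binary itself; the same handshake can be reproduced by hand, using either the /etc/crictl.yaml written earlier or an explicit endpoint:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
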
	I1018 12:19:42.071905  335075 ssh_runner.go:195] Run: crio --version
	I1018 12:19:42.099342  335075 ssh_runner.go:195] Run: crio --version
	I1018 12:19:42.130292  335075 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:19:42.131848  335075 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:19:42.149905  335075 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:19:42.153969  335075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
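The hosts rewrite above is a small idempotent upsert: grep -v strips any stale entry, echo appends the fresh one, and because the shell redirection runs as the unprivileged SSH user, the result is staged under /tmp and only then installed with sudo cp. The same pattern in isolation:

	# drop any old mapping (tab-separated, hence the $'\t' quoting), append the new one,
	# then install the staged copy with root privileges
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
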
	I1018 12:19:42.166256  335075 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 12:19:42.167500  335075 kubeadm.go:883] updating cluster {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:19:42.167619  335075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:42.167679  335075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:42.199119  335075 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:42.199141  335075 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:19:42.199187  335075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:42.225021  335075 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:42.225043  335075 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:19:42.225051  335075 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 12:19:42.225165  335075 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-579606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
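The doubled ExecStart in the kubelet drop-in above is deliberate systemd syntax: for ordinary (non-oneshot) services, a drop-in must first assign an empty ExecStart= to clear the command inherited from kubelet.service before it may set its own, otherwise systemd rejects the unit for having more than one ExecStart. The pattern in isolation (full flags as logged above):

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (pattern only)
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
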
	I1018 12:19:42.225227  335075 ssh_runner.go:195] Run: crio config
	I1018 12:19:42.272539  335075 cni.go:84] Creating CNI manager for ""
	I1018 12:19:42.272558  335075 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:42.272571  335075 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 12:19:42.272595  335075 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-579606 NodeName:newest-cni-579606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:19:42.272746  335075 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-579606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
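
The four YAML documents above are concatenated into a single file and shipped to /var/tmp/minikube/kubeadm.yaml.new (the 2211-byte scp below). For a manual sanity check of such a file, kubeadm ships its own validator; a sketch, assuming the staged binary under /var/lib/minikube/binaries:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new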
	
	I1018 12:19:42.272834  335075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:19:42.281290  335075 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:19:42.281357  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:19:42.289421  335075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:19:42.302598  335075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:19:42.316177  335075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 12:19:42.329352  335075 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:19:42.333314  335075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:42.343843  335075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:42.421738  335075 ssh_runner.go:195] Run: sudo systemctl start kubelet
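After the unit and drop-in are copied in, daemon-reload re-parses them and the kubelet restart picks them up. systemctl can print the merged result, which is a quick way to confirm the drop-in's ExecStart override took effect:

	sudo systemctl cat kubelet   # prints kubelet.service plus every drop-in, in apply order
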
	I1018 12:19:42.442404  335075 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606 for IP: 192.168.85.2
	I1018 12:19:42.442426  335075 certs.go:195] generating shared ca certs ...
	I1018 12:19:42.442445  335075 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:42.442689  335075 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:19:42.442753  335075 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:19:42.442788  335075 certs.go:257] generating profile certs ...
	I1018 12:19:42.442889  335075 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key
	I1018 12:19:42.442966  335075 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad
	I1018 12:19:42.443003  335075 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key
	I1018 12:19:42.443121  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:19:42.443154  335075 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:19:42.443164  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:19:42.443191  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:19:42.443213  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:19:42.443235  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:19:42.443271  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:42.443855  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:19:42.463239  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:19:42.483034  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:19:42.503605  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:19:42.528923  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:19:42.547339  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:19:42.564875  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:19:42.581997  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:19:42.599183  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:19:42.616574  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:19:42.634715  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:19:42.653018  335075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:19:42.665386  335075 ssh_runner.go:195] Run: openssl version
	I1018 12:19:42.671433  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:19:42.680058  335075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:42.683873  335075 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:42.683934  335075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:42.717886  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:19:42.726591  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:19:42.735540  335075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:19:42.739669  335075 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:19:42.739729  335075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:19:42.774178  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:19:42.782583  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:19:42.791202  335075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:19:42.795126  335075 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:19:42.795182  335075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:19:42.830258  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
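The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: OpenSSL locates CA certificates in a hashed directory by the certificate's subject-name hash plus a numeric suffix, which is why each ln -fs is preceded by an openssl x509 -hash call. Reproducing one link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
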
	I1018 12:19:42.838984  335075 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:19:42.842982  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:19:42.878568  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:19:42.913101  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:19:42.949213  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:19:42.997164  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:19:43.046288  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
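Each -checkend 86400 probe above asks whether the certificate stays valid for at least the next 86400 seconds (24 hours); openssl exits non-zero if not, presumably so the restart path can regenerate soon-to-expire control-plane certs before reusing them. The same check by hand:

	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for >= 24h" || echo "expires within 24h"
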
	I1018 12:19:43.096108  335075 kubeadm.go:400] StartCluster: {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:43.096218  335075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:43.096308  335075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:43.128660  335075 cri.go:89] found id: "53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21"
	I1018 12:19:43.128689  335075 cri.go:89] found id: "65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9"
	I1018 12:19:43.128695  335075 cri.go:89] found id: "3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822"
	I1018 12:19:43.128700  335075 cri.go:89] found id: "a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287"
	I1018 12:19:43.128704  335075 cri.go:89] found id: ""
	I1018 12:19:43.128750  335075 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:19:43.140820  335075 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:43Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:43.140912  335075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:19:43.148919  335075 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:19:43.148942  335075 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:19:43.149032  335075 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:19:43.156835  335075 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:19:43.157233  335075 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-579606" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:43.157325  335075 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-579606" cluster setting kubeconfig missing "newest-cni-579606" context setting]
	I1018 12:19:43.157644  335075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:43.158908  335075 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:19:43.167198  335075 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 12:19:43.167239  335075 kubeadm.go:601] duration metric: took 18.284745ms to restartPrimaryControlPlane
	I1018 12:19:43.167250  335075 kubeadm.go:402] duration metric: took 71.151656ms to StartCluster
	I1018 12:19:43.167268  335075 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:43.167347  335075 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:43.168095  335075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:43.168356  335075 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:19:43.168424  335075 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:19:43.168533  335075 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-579606"
	I1018 12:19:43.168554  335075 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-579606"
	W1018 12:19:43.168566  335075 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:19:43.168572  335075 addons.go:69] Setting dashboard=true in profile "newest-cni-579606"
	I1018 12:19:43.168597  335075 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:43.168599  335075 addons.go:238] Setting addon dashboard=true in "newest-cni-579606"
	W1018 12:19:43.168608  335075 addons.go:247] addon dashboard should already be in state true
	I1018 12:19:43.168617  335075 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:43.168641  335075 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:43.168663  335075 addons.go:69] Setting default-storageclass=true in profile "newest-cni-579606"
	I1018 12:19:43.168676  335075 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-579606"
	I1018 12:19:43.168954  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.169093  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.169124  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.171146  335075 out.go:179] * Verifying Kubernetes components...
	I1018 12:19:43.172605  335075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:43.195595  335075 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:19:43.196886  335075 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:19:43.198141  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:19:43.198165  335075 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:19:43.198143  335075 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:19:43.198243  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:43.198458  335075 addons.go:238] Setting addon default-storageclass=true in "newest-cni-579606"
	W1018 12:19:43.198483  335075 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:19:43.198516  335075 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:43.198930  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.204443  335075 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:43.204465  335075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:19:43.204519  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:43.230773  335075 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:43.230850  335075 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:19:43.230942  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:43.231297  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:43.238172  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:43.253859  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:43.311743  335075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:43.325144  335075 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:19:43.325239  335075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:19:43.338124  335075 api_server.go:72] duration metric: took 169.733551ms to wait for apiserver process to appear ...
	I1018 12:19:43.338159  335075 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:19:43.338179  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:43.344910  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:19:43.344935  335075 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:19:43.351039  335075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:43.360647  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:19:43.360672  335075 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:19:43.366194  335075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:43.376227  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:19:43.376253  335075 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:19:43.391550  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:19:43.391575  335075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:19:43.405706  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:19:43.405787  335075 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:19:43.420685  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:19:43.420717  335075 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:19:43.436142  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:19:43.436169  335075 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:19:43.449040  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:19:43.449067  335075 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:19:43.461318  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:19:43.461339  335075 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:19:43.473499  335075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:19:44.682167  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:19:44.682195  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:19:44.682209  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:44.723269  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:19:44.723304  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:19:44.838408  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:44.844293  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:19:44.844335  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:19:45.216718  335075 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.865639185s)
	I1018 12:19:45.216789  335075 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.850564284s)
	I1018 12:19:45.216936  335075 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.74339849s)
	I1018 12:19:45.218674  335075 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-579606 addons enable metrics-server
	
	I1018 12:19:45.228292  335075 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 12:19:45.229793  335075 addons.go:514] duration metric: took 2.061377114s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 12:19:45.339263  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:45.343421  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:19:45.343468  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:19:45.838941  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:45.843542  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:19:45.843580  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:19:46.338393  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:46.342980  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:19:46.344478  335075 api_server.go:141] control plane version: v1.34.1
	I1018 12:19:46.344503  335075 api_server.go:131] duration metric: took 3.006338044s to wait for apiserver health ...
	I1018 12:19:46.344512  335075 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:19:46.348611  335075 system_pods.go:59] 8 kube-system pods found
	I1018 12:19:46.348643  335075 system_pods.go:61] "coredns-66bc5c9577-p6bts" [49609244-6dc2-4950-8fad-8240b827ecca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:46.348652  335075 system_pods.go:61] "etcd-newest-cni-579606" [496c00b4-7ad1-40c0-a440-c396a752cbf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:19:46.348661  335075 system_pods.go:61] "kindnet-2c4t6" [08c0018d-0f0f-435e-8868-31818d5639fa] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:19:46.348668  335075 system_pods.go:61] "kube-apiserver-newest-cni-579606" [a39961c7-019e-41ec-8843-e98e9c2e3604] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:19:46.348674  335075 system_pods.go:61] "kube-controller-manager-newest-cni-579606" [992bd82d-6489-43da-83ba-8dcb6b86fe48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:19:46.348682  335075 system_pods.go:61] "kube-proxy-5hjgn" [915df613-23ce-49e2-b125-d223024077b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:19:46.348687  335075 system_pods.go:61] "kube-scheduler-newest-cni-579606" [2a1de39e-4fa6-49e8-a420-75a6c82ac73e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:19:46.348702  335075 system_pods.go:61] "storage-provisioner" [c7ff4c04-56e5-469b-9af2-dc1bf4fe969d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:46.348708  335075 system_pods.go:74] duration metric: took 4.191579ms to wait for pod list to return data ...
	I1018 12:19:46.348717  335075 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:19:46.351336  335075 default_sa.go:45] found service account: "default"
	I1018 12:19:46.351359  335075 default_sa.go:55] duration metric: took 2.63432ms for default service account to be created ...
	I1018 12:19:46.351371  335075 kubeadm.go:586] duration metric: took 3.182987363s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:46.351388  335075 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:19:46.354183  335075 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:19:46.354209  335075 node_conditions.go:123] node cpu capacity is 8
	I1018 12:19:46.354223  335075 node_conditions.go:105] duration metric: took 2.830056ms to run NodePressure ...
	I1018 12:19:46.354236  335075 start.go:241] waiting for startup goroutines ...
	I1018 12:19:46.354261  335075 start.go:246] waiting for cluster config update ...
	I1018 12:19:46.354280  335075 start.go:255] writing updated cluster config ...
	I1018 12:19:46.354652  335075 ssh_runner.go:195] Run: rm -f paused
	I1018 12:19:46.404669  335075 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:46.407603  335075 out.go:179] * Done! kubectl is now configured to use "newest-cni-579606" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.817935912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.820946927Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7c63f3f5-72e3-46ab-bed7-a491e11d40b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.821705022Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=724d4e8f-4a78-43c3-83f0-3268f46f18c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.822674352Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.823405859Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.823480394Z" level=info msg="Ran pod sandbox b90a998c71672440b7bf6a661a14abdf03d86b1f8701b7dca5efffd667de4b46 with infra container: kube-system/kube-proxy-5hjgn/POD" id=7c63f3f5-72e3-46ab-bed7-a491e11d40b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.82433523Z" level=info msg="Ran pod sandbox 464d103065151409ad9ab31e667d4287a1dd1d8eb263b49bd4de2e487954f411 with infra container: kube-system/kindnet-2c4t6/POD" id=724d4e8f-4a78-43c3-83f0-3268f46f18c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.824945573Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9e552546-9b96-4070-8feb-ae29b0afe460 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.825551751Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=532c83b3-a5be-4d4e-af78-732fbc72e2e7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.825946247Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c2949ee0-20b6-4130-a93f-708817c8bdda name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.826481683Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=96c5cf27-248a-4011-ac8d-a65c370189ca name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.826980603Z" level=info msg="Creating container: kube-system/kube-proxy-5hjgn/kube-proxy" id=d9c3d031-d806-4f0f-b803-b09dbab7ec08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.827251157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.828383346Z" level=info msg="Creating container: kube-system/kindnet-2c4t6/kindnet-cni" id=3b0038ce-c7a1-48f6-a2ee-c85ef5bb4c8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.829856091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.832806845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.833479835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.835294151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.836365205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.860634567Z" level=info msg="Created container f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06: kube-system/kindnet-2c4t6/kindnet-cni" id=3b0038ce-c7a1-48f6-a2ee-c85ef5bb4c8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.861370228Z" level=info msg="Starting container: f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06" id=1f7287db-0def-4daa-b1fb-9d63cfe42467 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.863365802Z" level=info msg="Started container" PID=1039 containerID=f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06 description=kube-system/kindnet-2c4t6/kindnet-cni id=1f7287db-0def-4daa-b1fb-9d63cfe42467 name=/runtime.v1.RuntimeService/StartContainer sandboxID=464d103065151409ad9ab31e667d4287a1dd1d8eb263b49bd4de2e487954f411
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.866371547Z" level=info msg="Created container b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967: kube-system/kube-proxy-5hjgn/kube-proxy" id=d9c3d031-d806-4f0f-b803-b09dbab7ec08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.867039133Z" level=info msg="Starting container: b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967" id=ef4c25c6-7d40-4c29-8983-d2354e6c0899 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.86983067Z" level=info msg="Started container" PID=1040 containerID=b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967 description=kube-system/kube-proxy-5hjgn/kube-proxy id=ef4c25c6-7d40-4c29-8983-d2354e6c0899 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b90a998c71672440b7bf6a661a14abdf03d86b1f8701b7dca5efffd667de4b46
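	The CRI-O excerpt above can be re-tailed outside the harness; assuming the kicbase image's systemd unit is named crio, something like:
	  minikube -p newest-cni-579606 ssh -- sudo journalctl -u crio -n 25 --no-pager
	reproduces it from inside the node.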
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f77ce49aa964c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   464d103065151       kindnet-2c4t6                               kube-system
	b014e2d1379a4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   b90a998c71672       kube-proxy-5hjgn                            kube-system
	53995b4d27c7e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   a79a7939a351a       etcd-newest-cni-579606                      kube-system
	65e093865c154       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   ef32e3abb377d       kube-controller-manager-newest-cni-579606   kube-system
	3c70d0ad55b06       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   367a6f7bfe8bc       kube-scheduler-newest-cni-579606            kube-system
	a98f4916acefd       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   32c85241bce3f       kube-apiserver-newest-cni-579606            kube-system
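	This table is crictl output; a manual re-run from inside the node (crictl ships in the kicbase image) would be:
	  minikube -p newest-cni-579606 ssh -- sudo crictl ps -a
	ATTEMPT=1 on every row means each container has been recreated exactly once, consistent with the stop/start cycle the Pause test performs.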
	
	
	==> describe nodes <==
	Name:               newest-cni-579606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-579606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=newest-cni-579606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:19:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-579606
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-579606
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                36059274-aa96-46ac-88d0-180e17b44739
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-579606                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-2c4t6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-579606             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-579606    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-5hjgn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-579606             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 41s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 41s)  kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 41s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  35s                kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     35s                kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           31s                node-controller  Node newest-cni-579606 event: Registered Node newest-cni-579606 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x4 over 8s)    kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x4 over 8s)    kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x4 over 8s)    kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-579606 event: Registered Node newest-cni-579606 in Controller
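	The node still carries the node.kubernetes.io/not-ready:NoSchedule taint because Ready is False (the CNI had not reported NetworkReady at capture time); the node lifecycle controller removes that taint on its own once the node goes Ready. To watch the taint clear, assuming the same context:
	  kubectl --context newest-cni-579606 get node newest-cni-579606 -o jsonpath='{.spec.taints}'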
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
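	The martian-source lines are host-kernel messages (a container's dmesg buffer is the host's); whether they are logged at all is governed by the log_martians sysctl, which can be checked with:
	  sysctl net.ipv4.conf.all.log_martians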
	
	
	==> etcd [53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21] <==
	{"level":"warn","ts":"2025-10-18T12:19:44.087312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.093700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.100031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.108705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.115451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.121562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.135405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.141832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.148065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.154669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.161893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.168829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.175971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.182235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.194888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.201793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.208372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.214698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.221729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.228512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.234473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.246027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.252377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.258731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.306573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60712","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:19:50 up  1:02,  0 user,  load average: 2.85, 3.71, 2.59
	Linux newest-cni-579606 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06] <==
	I1018 12:19:46.059115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:19:46.059394       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:19:46.059541       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:19:46.059556       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:19:46.059579       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:19:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:19:46.259877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:19:46.357015       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:19:46.357041       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:19:46.357356       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:19:46.757381       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:19:46.757412       1 metrics.go:72] Registering metrics
	I1018 12:19:46.757494       1 controller.go:711] "Syncing nftables rules"
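	kindnet enforces network policy through nftables (the final "Syncing nftables rules" line); the resulting ruleset can be dumped from inside the node with:
	  minikube -p newest-cni-579606 ssh -- sudo nft list ruleset
	The nri plugin message above is informational: /var/run/nri/nri.sock is absent because NRI is not enabled on this node, and the controller continues past it.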
	
	
	==> kube-apiserver [a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287] <==
	I1018 12:19:44.777858       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:19:44.778046       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:19:44.778124       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:19:44.778299       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:19:44.778536       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:19:44.777650       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:19:44.778606       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:19:44.778613       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:19:44.778620       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:19:44.784090       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:19:44.789018       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 12:19:44.789058       1 policy_source.go:240] refreshing policies
	I1018 12:19:44.808383       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:19:45.024942       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:19:45.055312       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:19:45.077206       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:19:45.087113       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:19:45.094895       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:19:45.132554       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.206.156"}
	I1018 12:19:45.145168       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.75.222"}
	I1018 12:19:45.680946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:19:48.457955       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:19:48.458003       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:19:48.507253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:19:48.606003       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9] <==
	I1018 12:19:48.084193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:19:48.089604       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:19:48.093937       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:19:48.096259       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:19:48.098529       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:19:48.099750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:19:48.099794       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:19:48.099852       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:19:48.103557       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:19:48.103585       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:19:48.103643       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:19:48.103655       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:19:48.103691       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:19:48.103714       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:19:48.103788       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:19:48.103877       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-579606"
	I1018 12:19:48.103950       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 12:19:48.104240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:19:48.106096       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:19:48.109453       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:19:48.114835       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:19:48.116052       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:19:48.116074       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:19:48.116089       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:19:48.129351       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967] <==
	I1018 12:19:45.905434       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:19:45.974668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:19:46.075343       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:19:46.075391       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 12:19:46.075481       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:19:46.095432       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:19:46.095502       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:19:46.100821       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:19:46.101259       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:19:46.101281       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:19:46.102650       1 config.go:200] "Starting service config controller"
	I1018 12:19:46.102701       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:19:46.102776       1 config.go:309] "Starting node config controller"
	I1018 12:19:46.102791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:19:46.102924       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:19:46.103346       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:19:46.103439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:19:46.103811       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:19:46.203672       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:19:46.203700       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:19:46.203714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:19:46.204842       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
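	The kube-proxy warning about nodePortAddresses carries its own suggested fix; in a kubeadm-style cluster such as this one the setting lives in the kube-proxy ConfigMap, which can be inspected with:
	  kubectl --context newest-cni-579606 -n kube-system get configmap kube-proxy -o yaml
	Setting nodePortAddresses: ["primary"] there (the configuration-file form of the --nodeport-addresses primary flag the log mentions) restricts NodePort listeners to the node's primary IP.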
	
	
	==> kube-scheduler [3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822] <==
	W1018 12:19:44.698574       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:19:44.698611       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:19:44.698623       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:19:44.698633       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:19:44.740085       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:19:44.740204       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:19:44.743329       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:19:44.743482       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:19:44.743499       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:19:44.743522       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:19:44.748860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:19:44.749151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:19:44.749303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:19:44.749465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:19:44.749791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:19:44.750478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:19:44.750803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:19:44.751494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:19:44.751648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:19:44.751955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:19:44.752285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:19:44.752642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:19:44.752747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:19:44.758505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1018 12:19:44.843654       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
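	The scheduler's "Failed to watch ... RBAC" errors are a startup race: its informers came up before the apiserver finished re-creating the bootstrap clusterroles. The final "Caches are synced" line shows they resolved on their own; the roles can be confirmed afterwards with:
	  kubectl --context newest-cni-579606 get clusterrole system:kube-scheduler system:volume-scheduler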
	
	
	==> kubelet <==
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.548635     668 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-579606\" not found" node="newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813191     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813311     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813405     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813443     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.814321     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.825402     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-579606\" already exists" pod="kube-system/etcd-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.825442     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.832320     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-579606\" already exists" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.832354     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.839600     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-579606\" already exists" pod="kube-system/kube-controller-manager-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.839636     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.846448     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-579606\" already exists" pod="kube-system/kube-scheduler-newest-cni-579606"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.508985     668 apiserver.go:52] "Watching apiserver"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.549634     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: E1018 12:19:45.558962     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-579606\" already exists" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.612076     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619168     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/915df613-23ce-49e2-b125-d223024077b0-xtables-lock\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619314     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/915df613-23ce-49e2-b125-d223024077b0-lib-modules\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619356     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-cni-cfg\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619421     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-xtables-lock\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619435     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-lib-modules\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:47 newest-cni-579606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:19:47 newest-cni-579606 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:19:47 newest-cni-579606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
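	The trailing systemd lines show the kubelet being stopped deliberately: the Pause step under test stops the kubelet service as part of pausing the node. Its state at any point can be checked with:
	  minikube -p newest-cni-579606 ssh -- sudo systemctl status kubelet --no-pager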
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-579606 -n newest-cni-579606
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-579606 -n newest-cni-579606: exit status 2 (314.551594ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-579606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499: exit status 1 (62.948452ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-p6bts" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-m7ktk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-25499" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-579606
helpers_test.go:243: (dbg) docker inspect newest-cni-579606:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681",
	        "Created": "2025-10-18T12:19:00.208907647Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:19:36.371587265Z",
	            "FinishedAt": "2025-10-18T12:19:35.392745108Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/hostname",
	        "HostsPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/hosts",
	        "LogPath": "/var/lib/docker/containers/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681/641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681-json.log",
	        "Name": "/newest-cni-579606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-579606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-579606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "641d4379c21ad2fe11854554cb42ba808448fecd0bf4f9e762ea9f02b78a5681",
	                "LowerDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717-init/diff:/var/lib/docker/overlay2/6fc8e312490bc09e2d54cd89f17bdec62d6bbbc819b4b0399340e505434e1533/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae8b372d5d03b5e68857f1e6e0aaeffa62edde2d277675d121e64bd92805a717/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-579606",
	                "Source": "/var/lib/docker/volumes/newest-cni-579606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-579606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-579606",
	                "name.minikube.sigs.k8s.io": "newest-cni-579606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e63077d99c6156c180490b2446125b6c6bde4bf1b53a8574295f05935690fce",
	            "SandboxKey": "/var/run/docker/netns/1e63077d99c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-579606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:c5:38:18:07:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f1c73ac1e12d550471cb62895be2add81ac8cf17de04960f0eacccc32c8d7ed",
	                    "EndpointID": "8a838023b9728c6ddb19ab298ea04b08bbc92e5f9a6d0fd03458d2e7e897eeff",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-579606",
	                        "641d4379c21a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
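One detail of the inspect output above is worth a note: HostConfig.PortBindings requests ephemeral ports (every HostPort is empty), and the resolved bindings only appear under NetworkSettings.Ports (33133-33137 here). A minimal Go sketch of reading one resolved port the same way the log below does with a "docker container inspect -f" template; it assumes the docker CLI is on PATH, and the container name is just the one from this report:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // resolvedHostPort asks the docker CLI for the host port that was
    // dynamically bound to the given container port (e.g. "22/tcp").
    // The template mirrors the one visible later in this log.
    func resolvedHostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := resolvedHostPort("newest-cni-579606", "22/tcp")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh reachable on 127.0.0.1:" + p)
    }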
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606: exit status 2 (310.651842ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
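The "may be ok" is because minikube status encodes component state in its exit code: a host that prints Running can still exit non-zero when the kubelet or apiserver is stopped or paused, which is exactly the situation after a pause test. A sketch (profile name and binary path taken from this report) of separating the printed state from the exit code in Go:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Runs the same status query as the harness above.
    	cmd := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "newest-cni-579606")
    	out, err := cmd.Output()
    	state := strings.TrimSpace(string(out))

    	code := 0
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		code = exitErr.ExitCode() // non-zero even though the host prints "Running"
    	} else if err != nil {
    		fmt.Println("could not run minikube:", err)
    		return
    	}
    	fmt.Printf("host=%s exitcode=%d\n", state, code)
    }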
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-579606 logs -n 25
E1018 12:19:51.546808    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ no-preload-406541 image list --format=json                                                                                                                                                                                                    │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p no-preload-406541 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ image   │ old-k8s-version-024443 image list --format=json                                                                                                                                                                                               │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ pause   │ -p old-k8s-version-024443 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ delete  │ -p old-k8s-version-024443                                                                                                                                                                                                                     │ old-k8s-version-024443       │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p no-preload-406541                                                                                                                                                                                                                          │ no-preload-406541            │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ default-k8s-diff-port-028309 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p default-k8s-diff-port-028309 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-579606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ stop    │ -p newest-cni-579606 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ embed-certs-175371 image list --format=json                                                                                                                                                                                                   │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p embed-certs-175371 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-028309                                                                                                                                                                                                               │ default-k8s-diff-port-028309 │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p embed-certs-175371                                                                                                                                                                                                                         │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ delete  │ -p embed-certs-175371                                                                                                                                                                                                                         │ embed-certs-175371           │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-579606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ start   │ -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ image   │ newest-cni-579606 image list --format=json                                                                                                                                                                                                    │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ pause   │ -p newest-cni-579606 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-579606            │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:19:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:19:36.137368  335075 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:36.137645  335075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:36.137657  335075 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:36.137664  335075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:36.137888  335075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:19:36.138388  335075 out.go:368] Setting JSON to false
	I1018 12:19:36.139434  335075 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3724,"bootTime":1760786252,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:19:36.139534  335075 start.go:141] virtualization: kvm guest
	I1018 12:19:36.141714  335075 out.go:179] * [newest-cni-579606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:19:36.143243  335075 notify.go:220] Checking for updates...
	I1018 12:19:36.143289  335075 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:19:36.144910  335075 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:19:36.146574  335075 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:36.148070  335075 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:19:36.149395  335075 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:19:36.150771  335075 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:19:36.152502  335075 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:36.152934  335075 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:19:36.176992  335075 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:19:36.177143  335075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:19:36.233999  335075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-18 12:19:36.222342082 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:19:36.234144  335075 docker.go:318] overlay module found
	I1018 12:19:36.236207  335075 out.go:179] * Using the docker driver based on existing profile
	I1018 12:19:36.237645  335075 start.go:305] selected driver: docker
	I1018 12:19:36.237662  335075 start.go:925] validating driver "docker" against &{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:36.237783  335075 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:19:36.238367  335075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:19:36.294808  335075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-18 12:19:36.284719824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:19:36.295164  335075 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:36.295194  335075 cni.go:84] Creating CNI manager for ""
	I1018 12:19:36.295252  335075 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:36.295299  335075 start.go:349] cluster config:
	{Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:36.297532  335075 out.go:179] * Starting "newest-cni-579606" primary control-plane node in "newest-cni-579606" cluster
	I1018 12:19:36.299258  335075 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:19:36.300692  335075 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:19:36.301848  335075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:36.301893  335075 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 12:19:36.301895  335075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:19:36.301906  335075 cache.go:58] Caching tarball of preloaded images
	I1018 12:19:36.302098  335075 preload.go:233] Found /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 12:19:36.302112  335075 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:19:36.302204  335075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:36.324652  335075 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:19:36.324678  335075 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:19:36.324701  335075 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:19:36.324743  335075 start.go:360] acquireMachinesLock for newest-cni-579606: {Name:mk4161cf0bf2eb93a8110dc388332ec9ca8fc5ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:19:36.324830  335075 start.go:364] duration metric: took 51.443µs to acquireMachinesLock for "newest-cni-579606"
	I1018 12:19:36.324854  335075 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:19:36.324864  335075 fix.go:54] fixHost starting: 
	I1018 12:19:36.325094  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:36.342982  335075 fix.go:112] recreateIfNeeded on newest-cni-579606: state=Stopped err=<nil>
	W1018 12:19:36.343024  335075 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:19:36.345208  335075 out.go:252] * Restarting existing docker container for "newest-cni-579606" ...
	I1018 12:19:36.345312  335075 cli_runner.go:164] Run: docker start newest-cni-579606
	I1018 12:19:36.594314  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:36.613801  335075 kic.go:430] container "newest-cni-579606" state is running.
	I1018 12:19:36.614215  335075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:36.633841  335075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/config.json ...
	I1018 12:19:36.634099  335075 machine.go:93] provisionDockerMachine start ...
	I1018 12:19:36.634191  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:36.654222  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:36.654471  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:36.654487  335075 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:19:36.655110  335075 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53330->127.0.0.1:33133: read: connection reset by peer
	I1018 12:19:39.790204  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
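	The reset handshake followed by a successful hostname command three seconds later is the normal pattern here: the container was just restarted and sshd is not yet accepting connections, so the first dial is refused and the client retries. A minimal retry sketch, assuming golang.org/x/crypto/ssh and the key path used elsewhere in this log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps attempting an SSH handshake; early attempts against
    // a just-started container often fail with "connection reset by peer",
    // exactly as the log above shows.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
    	var err error
    	for i := 0; i < attempts; i++ {
    		var c *ssh.Client
    		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
    			return c, nil
    		}
    		time.Sleep(time.Second)
    	}
    	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
    }

    func main() {
    	key, _ := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-579606/id_rsa"))
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		fmt.Println("bad key:", err)
    		return
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test node
    	}
    	if c, err := dialWithRetry("127.0.0.1:33133", cfg, 10); err == nil {
    		defer c.Close()
    		fmt.Println("connected")
    	}
    }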
	I1018 12:19:39.790236  335075 ubuntu.go:182] provisioning hostname "newest-cni-579606"
	I1018 12:19:39.790300  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:39.809358  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:39.809574  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:39.809591  335075 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-579606 && echo "newest-cni-579606" | sudo tee /etc/hostname
	I1018 12:19:39.952255  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-579606
	
	I1018 12:19:39.952342  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:39.970495  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:39.970743  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:39.970776  335075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-579606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-579606/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-579606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:19:40.103918  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
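	The empty output above means neither branch printed anything: either the hostname mapping already existed (the grep -xq '.*\snewest-cni-579606' guard makes the whole update idempotent), or an existing 127.0.1.1 line was rewritten in place by sed, which is silent. The same logic as a small Go sketch; ensureHostname is an illustrative name, not a minikube function:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostname mirrors the shell snippet above: if no line already maps
    // the hostname, either rewrite an existing 127.0.1.1 entry or append one.
    func ensureHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // already present; nothing to do
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
    	fmt.Print(ensureHostname(in, "newest-cni-579606"))
    }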
	I1018 12:19:40.103950  335075 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-5865/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-5865/.minikube}
	I1018 12:19:40.104005  335075 ubuntu.go:190] setting up certificates
	I1018 12:19:40.104022  335075 provision.go:84] configureAuth start
	I1018 12:19:40.104077  335075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:40.123311  335075 provision.go:143] copyHostCerts
	I1018 12:19:40.123388  335075 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem, removing ...
	I1018 12:19:40.123413  335075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem
	I1018 12:19:40.123496  335075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/ca.pem (1082 bytes)
	I1018 12:19:40.123747  335075 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem, removing ...
	I1018 12:19:40.123785  335075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem
	I1018 12:19:40.123842  335075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/cert.pem (1123 bytes)
	I1018 12:19:40.123952  335075 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem, removing ...
	I1018 12:19:40.123965  335075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem
	I1018 12:19:40.124031  335075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-5865/.minikube/key.pem (1679 bytes)
	I1018 12:19:40.124134  335075 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem org=jenkins.newest-cni-579606 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-579606]
	I1018 12:19:40.379660  335075 provision.go:177] copyRemoteCerts
	I1018 12:19:40.379724  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:19:40.379768  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.398109  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:40.497321  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:19:40.515000  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:19:40.532198  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:19:40.549409  335075 provision.go:87] duration metric: took 445.372225ms to configureAuth
	I1018 12:19:40.549443  335075 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:19:40.549604  335075 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:40.549688  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.568011  335075 main.go:141] libmachine: Using SSH client type: native
	I1018 12:19:40.568277  335075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83fde0] 0x842ae0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1018 12:19:40.568294  335075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:19:40.831510  335075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:19:40.831535  335075 machine.go:96] duration metric: took 4.197417627s to provisionDockerMachine
	I1018 12:19:40.831547  335075 start.go:293] postStartSetup for "newest-cni-579606" (driver="docker")
	I1018 12:19:40.831560  335075 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:19:40.831617  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:19:40.831684  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.850007  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:40.946361  335075 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:19:40.949946  335075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:19:40.949977  335075 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:19:40.949988  335075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/addons for local assets ...
	I1018 12:19:40.950043  335075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-5865/.minikube/files for local assets ...
	I1018 12:19:40.950123  335075 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem -> 93602.pem in /etc/ssl/certs
	I1018 12:19:40.950219  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:19:40.957723  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:40.974965  335075 start.go:296] duration metric: took 143.401884ms for postStartSetup
	I1018 12:19:40.975058  335075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:19:40.975103  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:40.993512  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:41.087262  335075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:19:41.092104  335075 fix.go:56] duration metric: took 4.767233113s for fixHost
	I1018 12:19:41.092134  335075 start.go:83] releasing machines lock for "newest-cni-579606", held for 4.767291003s
	I1018 12:19:41.092204  335075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-579606
	I1018 12:19:41.110754  335075 ssh_runner.go:195] Run: cat /version.json
	I1018 12:19:41.110818  335075 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:19:41.110835  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:41.110915  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:41.130109  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:41.130319  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:41.277617  335075 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:41.284424  335075 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:19:41.321160  335075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:19:41.326237  335075 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:19:41.326321  335075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:19:41.335085  335075 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:19:41.335111  335075 start.go:495] detecting cgroup driver to use...
	I1018 12:19:41.335142  335075 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 12:19:41.335189  335075 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:19:41.350564  335075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:19:41.363255  335075 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:19:41.363325  335075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:19:41.378641  335075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:19:41.391318  335075 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:19:41.472724  335075 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:19:41.553719  335075 docker.go:234] disabling docker service ...
	I1018 12:19:41.553812  335075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:19:41.567833  335075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:19:41.579981  335075 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:19:41.660366  335075 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:19:41.737906  335075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:19:41.751046  335075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:19:41.766637  335075 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:19:41.766704  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.775840  335075 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 12:19:41.775908  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.784549  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.793137  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.802070  335075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:19:41.810220  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.819325  335075 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.827701  335075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:19:41.836535  335075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:19:41.844196  335075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:19:41.851604  335075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:41.931321  335075 ssh_runner.go:195] Run: sudo systemctl restart crio
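	The run of sed commands above is how the CRI-O config at /etc/crio/crio.conf.d/02-crio.conf gets reconciled before the restart: each command replaces the whole line that assigns a key (pause_image, cgroup_manager, ...) rather than appending a duplicate. A hedged Go sketch of that replace-the-assignment step; setConfLine is an illustrative helper, not part of minikube:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfLine mimics the sed invocations above: replace the whole line
    // that assigns `key = ...` with a fresh quoted assignment.
    func setConfLine(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' ...
    	if err := setConfLine("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "systemd"); err != nil {
    		fmt.Println(err)
    	}
    }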
	I1018 12:19:42.037855  335075 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:19:42.037929  335075 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:19:42.041913  335075 start.go:563] Will wait 60s for crictl version
	I1018 12:19:42.041961  335075 ssh_runner.go:195] Run: which crictl
	I1018 12:19:42.045709  335075 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:19:42.071835  335075 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:19:42.071905  335075 ssh_runner.go:195] Run: crio --version
	I1018 12:19:42.099342  335075 ssh_runner.go:195] Run: crio --version
	I1018 12:19:42.130292  335075 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:19:42.131848  335075 cli_runner.go:164] Run: docker network inspect newest-cni-579606 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:19:42.149905  335075 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 12:19:42.153969  335075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:19:42.166256  335075 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 12:19:42.167500  335075 kubeadm.go:883] updating cluster {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:19:42.167619  335075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:19:42.167679  335075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:42.199119  335075 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:42.199141  335075 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:19:42.199187  335075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:19:42.225021  335075 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:19:42.225043  335075 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:19:42.225051  335075 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 12:19:42.225165  335075 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-579606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
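	The [Service] drop-in above is rendered per node: --hostname-override and --node-ip carry the values from the cluster config, and the result is what gets copied as 10-kubeadm.conf (367 bytes) a few lines below. A sketch of rendering it with text/template; the template fields are illustrative, not minikube's actual config type:

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	// Fill in the node-specific values seen in the drop-in above.
    	t := template.Must(template.New("kubelet").Parse(unit))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.34.1",
    		"NodeName":          "newest-cni-579606",
    		"NodeIP":            "192.168.85.2",
    	})
    }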
	I1018 12:19:42.225227  335075 ssh_runner.go:195] Run: crio config
	I1018 12:19:42.272539  335075 cni.go:84] Creating CNI manager for ""
	I1018 12:19:42.272558  335075 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:19:42.272571  335075 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 12:19:42.272595  335075 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-579606 NodeName:newest-cni-579606 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:19:42.272746  335075 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-579606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
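
	The multi-document YAML above is the kubeadm config minikube renders for the node: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration, separated by ---, which is then shipped to /var/tmp/minikube/kubeadm.yaml.new (2211 bytes, per the scp line below). As a minimal sketch of working with such a file (illustrative Go, not minikube's own code; assumes the gopkg.in/yaml.v3 module), the documents can be enumerated by kind:

	    // Sketch: enumerate the kinds in a multi-document kubeadm YAML file.
	    package main

	    import (
	    	"fmt"
	    	"io"
	    	"os"

	    	"gopkg.in/yaml.v3"
	    )

	    func main() {
	    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer f.Close()

	    	dec := yaml.NewDecoder(f) // decodes one YAML document per Decode call
	    	for {
	    		var doc struct {
	    			APIVersion string `yaml:"apiVersion"`
	    			Kind       string `yaml:"kind"`
	    		}
	    		if err := dec.Decode(&doc); err == io.EOF {
	    			break
	    		} else if err != nil {
	    			panic(err)
	    		}
	    		fmt.Printf("%s/%s\n", doc.APIVersion, doc.Kind)
	    	}
	    }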
	
	I1018 12:19:42.272834  335075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:19:42.281290  335075 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:19:42.281357  335075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:19:42.289421  335075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 12:19:42.302598  335075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:19:42.316177  335075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1018 12:19:42.329352  335075 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:19:42.333314  335075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
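
	The bash one-liner above makes the /etc/hosts pin idempotent: every line ending in a tab plus control-plane.minikube.internal is filtered out, the current mapping is appended, and the result is copied back with sudo. A rough Go equivalent of that filter-and-append step (a sketch only; the IP and hostname are taken from the log, and like the original it would need root to write /etc/hosts):

	    // Sketch: idempotently pin a control-plane hostname in a hosts file.
	    package main

	    import (
	    	"os"
	    	"strings"
	    )

	    func pinHost(path, ip, host string) error {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return err
	    	}
	    	var kept []string
	    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	    		// Drop any previous mapping for this hostname (grep -v $'\t'host$).
	    		if strings.HasSuffix(line, "\t"+host) {
	    			continue
	    		}
	    		kept = append(kept, line)
	    	}
	    	kept = append(kept, ip+"\t"+host)
	    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	    }

	    func main() {
	    	if err := pinHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
	    		panic(err)
	    	}
	    }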
	I1018 12:19:42.343843  335075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:42.421738  335075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:42.442404  335075 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606 for IP: 192.168.85.2
	I1018 12:19:42.442426  335075 certs.go:195] generating shared ca certs ...
	I1018 12:19:42.442445  335075 certs.go:227] acquiring lock for ca certs: {Name:mkf18db0aec0603f73244592bd04db96c46b8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:42.442689  335075 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key
	I1018 12:19:42.442753  335075 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key
	I1018 12:19:42.442788  335075 certs.go:257] generating profile certs ...
	I1018 12:19:42.442889  335075 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/client.key
	I1018 12:19:42.442966  335075 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key.54335aad
	I1018 12:19:42.443003  335075 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key
	I1018 12:19:42.443121  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem (1338 bytes)
	W1018 12:19:42.443154  335075 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360_empty.pem, impossibly tiny 0 bytes
	I1018 12:19:42.443164  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:19:42.443191  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:19:42.443213  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:19:42.443235  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/certs/key.pem (1679 bytes)
	I1018 12:19:42.443271  335075 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem (1708 bytes)
	I1018 12:19:42.443855  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:19:42.463239  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:19:42.483034  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:19:42.503605  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:19:42.528923  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:19:42.547339  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:19:42.564875  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:19:42.581997  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/newest-cni-579606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:19:42.599183  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/certs/9360.pem --> /usr/share/ca-certificates/9360.pem (1338 bytes)
	I1018 12:19:42.616574  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/ssl/certs/93602.pem --> /usr/share/ca-certificates/93602.pem (1708 bytes)
	I1018 12:19:42.634715  335075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-5865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:19:42.653018  335075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:19:42.665386  335075 ssh_runner.go:195] Run: openssl version
	I1018 12:19:42.671433  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:19:42.680058  335075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:42.683873  335075 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:42.683934  335075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:19:42.717886  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:19:42.726591  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9360.pem && ln -fs /usr/share/ca-certificates/9360.pem /etc/ssl/certs/9360.pem"
	I1018 12:19:42.735540  335075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9360.pem
	I1018 12:19:42.739669  335075 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9360.pem
	I1018 12:19:42.739729  335075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9360.pem
	I1018 12:19:42.774178  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9360.pem /etc/ssl/certs/51391683.0"
	I1018 12:19:42.782583  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93602.pem && ln -fs /usr/share/ca-certificates/93602.pem /etc/ssl/certs/93602.pem"
	I1018 12:19:42.791202  335075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93602.pem
	I1018 12:19:42.795126  335075 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/93602.pem
	I1018 12:19:42.795182  335075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93602.pem
	I1018 12:19:42.830258  335075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93602.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:19:42.838984  335075 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:19:42.842982  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:19:42.878568  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:19:42.913101  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:19:42.949213  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:19:42.997164  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:19:43.046288  335075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
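
	Each openssl x509 -checkend 86400 run above exits non-zero only if the certificate expires within the next 24 hours; that exit status is what decides whether a cert must be regenerated. The same check using only the Go standard library (a sketch; the path is one of the certs checked above):

	    // Sketch: report whether a PEM certificate expires within 24 hours,
	    // mirroring `openssl x509 -checkend 86400`.
	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		panic("no PEM block found")
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		panic(err)
	    	}
	    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
	    		fmt.Println("certificate will expire within 24h")
	    		os.Exit(1) // same convention as -checkend: non-zero means "too close"
	    	}
	    	fmt.Println("certificate is valid for at least 24h")
	    }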
	I1018 12:19:43.096108  335075 kubeadm.go:400] StartCluster: {Name:newest-cni-579606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-579606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:19:43.096218  335075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:43.096308  335075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:43.128660  335075 cri.go:89] found id: "53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21"
	I1018 12:19:43.128689  335075 cri.go:89] found id: "65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9"
	I1018 12:19:43.128695  335075 cri.go:89] found id: "3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822"
	I1018 12:19:43.128700  335075 cri.go:89] found id: "a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287"
	I1018 12:19:43.128704  335075 cri.go:89] found id: ""
	I1018 12:19:43.128750  335075 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:19:43.140820  335075 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:43Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:19:43.140912  335075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:19:43.148919  335075 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:19:43.148942  335075 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:19:43.149032  335075 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:19:43.156835  335075 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:19:43.157233  335075 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-579606" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:43.157325  335075 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-5865/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-579606" cluster setting kubeconfig missing "newest-cni-579606" context setting]
	I1018 12:19:43.157644  335075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:43.158908  335075 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:19:43.167198  335075 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 12:19:43.167239  335075 kubeadm.go:601] duration metric: took 18.284745ms to restartPrimaryControlPlane
	I1018 12:19:43.167250  335075 kubeadm.go:402] duration metric: took 71.151656ms to StartCluster
	I1018 12:19:43.167268  335075 settings.go:142] acquiring lock: {Name:mk85e05213f6fb6297c621146263971d0010a36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:43.167347  335075 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:19:43.168095  335075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-5865/kubeconfig: {Name:mk54ee9ce511db65f95d71044d27029a393a9a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:19:43.168356  335075 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:19:43.168424  335075 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:19:43.168533  335075 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-579606"
	I1018 12:19:43.168554  335075 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-579606"
	W1018 12:19:43.168566  335075 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:19:43.168572  335075 addons.go:69] Setting dashboard=true in profile "newest-cni-579606"
	I1018 12:19:43.168597  335075 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:43.168599  335075 addons.go:238] Setting addon dashboard=true in "newest-cni-579606"
	W1018 12:19:43.168608  335075 addons.go:247] addon dashboard should already be in state true
	I1018 12:19:43.168617  335075 config.go:182] Loaded profile config "newest-cni-579606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:43.168641  335075 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:43.168663  335075 addons.go:69] Setting default-storageclass=true in profile "newest-cni-579606"
	I1018 12:19:43.168676  335075 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-579606"
	I1018 12:19:43.168954  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.169093  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.169124  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.171146  335075 out.go:179] * Verifying Kubernetes components...
	I1018 12:19:43.172605  335075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:19:43.195595  335075 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 12:19:43.196886  335075 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 12:19:43.198141  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 12:19:43.198165  335075 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 12:19:43.198143  335075 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:19:43.198243  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:43.198458  335075 addons.go:238] Setting addon default-storageclass=true in "newest-cni-579606"
	W1018 12:19:43.198483  335075 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:19:43.198516  335075 host.go:66] Checking if "newest-cni-579606" exists ...
	I1018 12:19:43.198930  335075 cli_runner.go:164] Run: docker container inspect newest-cni-579606 --format={{.State.Status}}
	I1018 12:19:43.204443  335075 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:43.204465  335075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:19:43.204519  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:43.230773  335075 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:43.230850  335075 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:19:43.230942  335075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-579606
	I1018 12:19:43.231297  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:43.238172  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:43.253859  335075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/newest-cni-579606/id_rsa Username:docker}
	I1018 12:19:43.311743  335075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:19:43.325144  335075 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:19:43.325239  335075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:19:43.338124  335075 api_server.go:72] duration metric: took 169.733551ms to wait for apiserver process to appear ...
	I1018 12:19:43.338159  335075 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:19:43.338179  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:43.344910  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 12:19:43.344935  335075 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 12:19:43.351039  335075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:19:43.360647  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 12:19:43.360672  335075 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 12:19:43.366194  335075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:19:43.376227  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 12:19:43.376253  335075 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 12:19:43.391550  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 12:19:43.391575  335075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 12:19:43.405706  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 12:19:43.405787  335075 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 12:19:43.420685  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 12:19:43.420717  335075 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 12:19:43.436142  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 12:19:43.436169  335075 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 12:19:43.449040  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 12:19:43.449067  335075 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 12:19:43.461318  335075 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 12:19:43.461339  335075 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 12:19:43.473499  335075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
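
	The dashboard addon above is staged file by file under /etc/kubernetes/addons and then applied in a single kubectl invocation with repeated -f flags, with KUBECONFIG pointed at the in-cluster kubeconfig. A stripped-down sketch of that pattern (hypothetical helper, not minikube's code; paths mirror the log):

	    // Sketch: apply several staged manifests with one kubectl call.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    func applyManifests(kubectl, kubeconfig string, files []string) error {
	    	args := []string{"apply"}
	    	for _, f := range files {
	    		args = append(args, "-f", f) // one -f per staged manifest
	    	}
	    	cmd := exec.Command(kubectl, args...)
	    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	    	return cmd.Run()
	    }

	    func main() {
	    	files := []string{
	    		"/etc/kubernetes/addons/dashboard-ns.yaml",
	    		"/etc/kubernetes/addons/dashboard-svc.yaml",
	    	}
	    	err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
	    		"/var/lib/minikube/kubeconfig", files)
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    }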
	I1018 12:19:44.682167  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:19:44.682195  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:19:44.682209  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:44.723269  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:19:44.723304  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:19:44.838408  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:44.844293  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:19:44.844335  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
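
	The healthz probes follow a fixed progression during a restart: 403 while anonymous access is still forbidden, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A polling loop in that spirit (a sketch; it skips TLS verification for brevity where real code should trust the cluster CA):

	    // Sketch: poll the apiserver /healthz endpoint until it returns 200 ok.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 2 * time.Second,
	    		Transport: &http.Transport{
	    			// Illustrative only: production code should verify against the cluster CA.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	deadline := time.Now().Add(6 * time.Minute)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Println("healthz:", string(body)) // "ok"
	    				return
	    			}
	    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	fmt.Println("timed out waiting for apiserver health")
	    }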
	I1018 12:19:45.216718  335075 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.865639185s)
	I1018 12:19:45.216789  335075 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.850564284s)
	I1018 12:19:45.216936  335075 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.74339849s)
	I1018 12:19:45.218674  335075 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-579606 addons enable metrics-server
	
	I1018 12:19:45.228292  335075 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 12:19:45.229793  335075 addons.go:514] duration metric: took 2.061377114s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 12:19:45.339263  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:45.343421  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:19:45.343468  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:19:45.838941  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:45.843542  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:19:45.843580  335075 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:19:46.338393  335075 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 12:19:46.342980  335075 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 12:19:46.344478  335075 api_server.go:141] control plane version: v1.34.1
	I1018 12:19:46.344503  335075 api_server.go:131] duration metric: took 3.006338044s to wait for apiserver health ...
	I1018 12:19:46.344512  335075 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:19:46.348611  335075 system_pods.go:59] 8 kube-system pods found
	I1018 12:19:46.348643  335075 system_pods.go:61] "coredns-66bc5c9577-p6bts" [49609244-6dc2-4950-8fad-8240b827ecca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:46.348652  335075 system_pods.go:61] "etcd-newest-cni-579606" [496c00b4-7ad1-40c0-a440-c396a752cbf4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:19:46.348661  335075 system_pods.go:61] "kindnet-2c4t6" [08c0018d-0f0f-435e-8868-31818d5639fa] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 12:19:46.348668  335075 system_pods.go:61] "kube-apiserver-newest-cni-579606" [a39961c7-019e-41ec-8843-e98e9c2e3604] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:19:46.348674  335075 system_pods.go:61] "kube-controller-manager-newest-cni-579606" [992bd82d-6489-43da-83ba-8dcb6b86fe48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:19:46.348682  335075 system_pods.go:61] "kube-proxy-5hjgn" [915df613-23ce-49e2-b125-d223024077b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 12:19:46.348687  335075 system_pods.go:61] "kube-scheduler-newest-cni-579606" [2a1de39e-4fa6-49e8-a420-75a6c82ac73e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:19:46.348702  335075 system_pods.go:61] "storage-provisioner" [c7ff4c04-56e5-469b-9af2-dc1bf4fe969d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:19:46.348708  335075 system_pods.go:74] duration metric: took 4.191579ms to wait for pod list to return data ...
	I1018 12:19:46.348717  335075 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:19:46.351336  335075 default_sa.go:45] found service account: "default"
	I1018 12:19:46.351359  335075 default_sa.go:55] duration metric: took 2.63432ms for default service account to be created ...
	I1018 12:19:46.351371  335075 kubeadm.go:586] duration metric: took 3.182987363s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 12:19:46.351388  335075 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:19:46.354183  335075 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 12:19:46.354209  335075 node_conditions.go:123] node cpu capacity is 8
	I1018 12:19:46.354223  335075 node_conditions.go:105] duration metric: took 2.830056ms to run NodePressure ...
	I1018 12:19:46.354236  335075 start.go:241] waiting for startup goroutines ...
	I1018 12:19:46.354261  335075 start.go:246] waiting for cluster config update ...
	I1018 12:19:46.354280  335075 start.go:255] writing updated cluster config ...
	I1018 12:19:46.354652  335075 ssh_runner.go:195] Run: rm -f paused
	I1018 12:19:46.404669  335075 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 12:19:46.407603  335075 out.go:179] * Done! kubectl is now configured to use "newest-cni-579606" cluster and "default" namespace by default
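
	The start.go:624 line above compares the host kubectl against the cluster version and reports the minor-version skew (0 here, since both are 1.34.1). The arithmetic behind that message is simple (an illustrative sketch, with the versions copied from the log):

	    // Sketch: compute the minor-version skew between kubectl and the cluster.
	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    )

	    func minor(v string) int {
	    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	    	if len(parts) < 2 {
	    		return -1
	    	}
	    	m, err := strconv.Atoi(parts[1])
	    	if err != nil {
	    		return -1
	    	}
	    	return m
	    }

	    func main() {
	    	kubectl, cluster := "1.34.1", "1.34.1" // values from the log above
	    	skew := minor(kubectl) - minor(cluster)
	    	if skew < 0 {
	    		skew = -skew
	    	}
	    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	    }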
	
	
	==> CRI-O <==
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.817935912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.820946927Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7c63f3f5-72e3-46ab-bed7-a491e11d40b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.821705022Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=724d4e8f-4a78-43c3-83f0-3268f46f18c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.822674352Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.823405859Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.823480394Z" level=info msg="Ran pod sandbox b90a998c71672440b7bf6a661a14abdf03d86b1f8701b7dca5efffd667de4b46 with infra container: kube-system/kube-proxy-5hjgn/POD" id=7c63f3f5-72e3-46ab-bed7-a491e11d40b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.82433523Z" level=info msg="Ran pod sandbox 464d103065151409ad9ab31e667d4287a1dd1d8eb263b49bd4de2e487954f411 with infra container: kube-system/kindnet-2c4t6/POD" id=724d4e8f-4a78-43c3-83f0-3268f46f18c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.824945573Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9e552546-9b96-4070-8feb-ae29b0afe460 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.825551751Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=532c83b3-a5be-4d4e-af78-732fbc72e2e7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.825946247Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c2949ee0-20b6-4130-a93f-708817c8bdda name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.826481683Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=96c5cf27-248a-4011-ac8d-a65c370189ca name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.826980603Z" level=info msg="Creating container: kube-system/kube-proxy-5hjgn/kube-proxy" id=d9c3d031-d806-4f0f-b803-b09dbab7ec08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.827251157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.828383346Z" level=info msg="Creating container: kube-system/kindnet-2c4t6/kindnet-cni" id=3b0038ce-c7a1-48f6-a2ee-c85ef5bb4c8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.829856091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.832806845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.833479835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.835294151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.836365205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.860634567Z" level=info msg="Created container f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06: kube-system/kindnet-2c4t6/kindnet-cni" id=3b0038ce-c7a1-48f6-a2ee-c85ef5bb4c8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.861370228Z" level=info msg="Starting container: f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06" id=1f7287db-0def-4daa-b1fb-9d63cfe42467 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.863365802Z" level=info msg="Started container" PID=1039 containerID=f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06 description=kube-system/kindnet-2c4t6/kindnet-cni id=1f7287db-0def-4daa-b1fb-9d63cfe42467 name=/runtime.v1.RuntimeService/StartContainer sandboxID=464d103065151409ad9ab31e667d4287a1dd1d8eb263b49bd4de2e487954f411
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.866371547Z" level=info msg="Created container b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967: kube-system/kube-proxy-5hjgn/kube-proxy" id=d9c3d031-d806-4f0f-b803-b09dbab7ec08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.867039133Z" level=info msg="Starting container: b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967" id=ef4c25c6-7d40-4c29-8983-d2354e6c0899 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:45 newest-cni-579606 crio[519]: time="2025-10-18T12:19:45.86983067Z" level=info msg="Started container" PID=1040 containerID=b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967 description=kube-system/kube-proxy-5hjgn/kube-proxy id=ef4c25c6-7d40-4c29-8983-d2354e6c0899 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b90a998c71672440b7bf6a661a14abdf03d86b1f8701b7dca5efffd667de4b46
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f77ce49aa964c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   464d103065151       kindnet-2c4t6                               kube-system
	b014e2d1379a4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   b90a998c71672       kube-proxy-5hjgn                            kube-system
	53995b4d27c7e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   a79a7939a351a       etcd-newest-cni-579606                      kube-system
	65e093865c154       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   ef32e3abb377d       kube-controller-manager-newest-cni-579606   kube-system
	3c70d0ad55b06       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   367a6f7bfe8bc       kube-scheduler-newest-cni-579606            kube-system
	a98f4916acefd       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   32c85241bce3f       kube-apiserver-newest-cni-579606            kube-system
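
	Every container in the table shows ATTEMPT 1: these are the restarted instances created after the pause/unpause cycle, matching the "Created container" lines in the CRI-O journal above. A sketch of producing a similar listing by parsing crictl's JSON output (the JSON field names here follow crictl's observed output and should be treated as an assumption):

	    // Sketch: list kube-system containers via `crictl ps -a --output json`.
	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"os/exec"
	    )

	    type psOutput struct {
	    	Containers []struct {
	    		ID       string `json:"id"`
	    		State    string `json:"state"`
	    		Metadata struct {
	    			Name    string `json:"name"`
	    			Attempt int    `json:"attempt"`
	    		} `json:"metadata"`
	    	} `json:"containers"`
	    }

	    func main() {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
	    		"--label", "io.kubernetes.pod.namespace=kube-system",
	    		"--output", "json").Output()
	    	if err != nil {
	    		panic(err)
	    	}
	    	var ps psOutput
	    	if err := json.Unmarshal(out, &ps); err != nil {
	    		panic(err)
	    	}
	    	for _, c := range ps.Containers {
	    		id := c.ID
	    		if len(id) > 13 {
	    			id = id[:13] // truncate like the table above
	    		}
	    		fmt.Printf("%s\t%s\tattempt=%d\t%s\n", id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	    	}
	    }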
	
	
	==> describe nodes <==
	Name:               newest-cni-579606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-579606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=newest-cni-579606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_19_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:19:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-579606
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 12:19:44 +0000   Sat, 18 Oct 2025 12:19:10 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-579606
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                36059274-aa96-46ac-88d0-180e17b44739
	  Boot ID:                    6773a282-37fa-47b1-b6ae-942a8630a1f6
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-579606                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-2c4t6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-579606             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-579606    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-5hjgn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-579606             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 43s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 43s)  kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x8 over 43s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    37s                kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  37s                kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     37s                kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           33s                node-controller  Node newest-cni-579606 event: Registered Node newest-cni-579606 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x4 over 10s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x4 over 10s)  kubelet          Node newest-cni-579606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x4 over 10s)  kubelet          Node newest-cni-579606 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-579606 event: Registered Node newest-cni-579606 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +11.948953] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 93 07 de 40 6d 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 2f a5 3a 37 fc 08 06
	[  +0.204454] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 4b 47 1f ce e5 08 06
	[Oct18 12:16] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 88 62 1b dd a7 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 f1 aa 42 b3 1d 08 06
	[  +0.000901] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee c1 85 1f 6c 4c 08 06
	[ +26.035563] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 9e 15 3f 0e e1 08 06
	[  +0.000631] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 55 46 ae a1 7f 08 06
	[  +2.492998] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 63 10 7e 7b f1 08 06
	[  +0.001695] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	[ +18.118461] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e eb 77 72 c6 18 08 06
	[  +0.000342] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 9b 2e e7 1e fb 08 06
	
	
	==> etcd [53995b4d27c7ed8d1750a76428d42e3482e82b66648b564a8449012550c4dd21] <==
	{"level":"warn","ts":"2025-10-18T12:19:44.087312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.093700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.100031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.108705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.115451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.121562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.135405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.141832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.148065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.154669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.161893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.168829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.175971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.182235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.194888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.201793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.208372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.214698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.221729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.228512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.234473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.246027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.252377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.258731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:19:44.306573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60712","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:19:52 up  1:02,  0 user,  load average: 2.85, 3.71, 2.59
	Linux newest-cni-579606 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f77ce49aa964ce8c11b798ebb5a3965e54e02acb5fb351ec42a7874232b68f06] <==
	I1018 12:19:46.059115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:19:46.059394       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 12:19:46.059541       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:19:46.059556       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:19:46.059579       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:19:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:19:46.259877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:19:46.357015       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:19:46.357041       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:19:46.357356       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:19:46.757381       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:19:46.757412       1 metrics.go:72] Registering metrics
	I1018 12:19:46.757494       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a98f4916acefd406445cdb9712752ed056428cdaa724922263c4b9e6f4e91287] <==
	I1018 12:19:44.777858       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:19:44.778046       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:19:44.778124       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:19:44.778299       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:19:44.778536       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:19:44.777650       1 aggregator.go:171] initial CRD sync complete...
	I1018 12:19:44.778606       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 12:19:44.778613       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 12:19:44.778620       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:19:44.784090       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:19:44.789018       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 12:19:44.789058       1 policy_source.go:240] refreshing policies
	I1018 12:19:44.808383       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:19:45.024942       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:19:45.055312       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:19:45.077206       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:19:45.087113       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:19:45.094895       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:19:45.132554       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.206.156"}
	I1018 12:19:45.145168       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.75.222"}
	I1018 12:19:45.680946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:19:48.457955       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:19:48.458003       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:19:48.507253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:19:48.606003       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [65e093865c154edbace2f9e377b1409b613c3dd057053e8b0d41c52ff85581f9] <==
	I1018 12:19:48.084193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:19:48.089604       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 12:19:48.093937       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:19:48.096259       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:19:48.098529       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:19:48.099750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:19:48.099794       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:19:48.099852       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:19:48.103557       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:19:48.103585       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 12:19:48.103643       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:19:48.103655       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:19:48.103691       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:19:48.103714       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:19:48.103788       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:19:48.103877       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-579606"
	I1018 12:19:48.103950       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 12:19:48.104240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:19:48.106096       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:19:48.109453       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:19:48.114835       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:19:48.116052       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:19:48.116074       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:19:48.116089       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:19:48.129351       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b014e2d1379a4cbaea0d383d7a9062226eff1bd74baf23d918d241a37d506967] <==
	I1018 12:19:45.905434       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:19:45.974668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:19:46.075343       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:19:46.075391       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 12:19:46.075481       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:19:46.095432       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:19:46.095502       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:19:46.100821       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:19:46.101259       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:19:46.101281       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:19:46.102650       1 config.go:200] "Starting service config controller"
	I1018 12:19:46.102701       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:19:46.102776       1 config.go:309] "Starting node config controller"
	I1018 12:19:46.102791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:19:46.102924       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:19:46.103346       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:19:46.103439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:19:46.103811       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:19:46.203672       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:19:46.203700       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:19:46.203714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:19:46.204842       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3c70d0ad55b06bcec8f4631eccdcc42b9ffd4b815eb4f4b70fdbbfd7d1551822] <==
	W1018 12:19:44.698574       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 12:19:44.698611       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 12:19:44.698623       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 12:19:44.698633       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 12:19:44.740085       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:19:44.740204       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:19:44.743329       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:19:44.743482       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:19:44.743499       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:19:44.743522       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:19:44.748860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:19:44.749151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:19:44.749303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:19:44.749465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:19:44.749791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:19:44.750478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:19:44.750803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:19:44.751494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:19:44.751648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:19:44.751955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:19:44.752285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:19:44.752642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:19:44.752747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:19:44.758505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1018 12:19:44.843654       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.548635     668 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-579606\" not found" node="newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813191     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813311     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813405     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.813443     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.814321     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.825402     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-579606\" already exists" pod="kube-system/etcd-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.825442     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.832320     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-579606\" already exists" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.832354     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.839600     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-579606\" already exists" pod="kube-system/kube-controller-manager-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: I1018 12:19:44.839636     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-579606"
	Oct 18 12:19:44 newest-cni-579606 kubelet[668]: E1018 12:19:44.846448     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-579606\" already exists" pod="kube-system/kube-scheduler-newest-cni-579606"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.508985     668 apiserver.go:52] "Watching apiserver"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.549634     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: E1018 12:19:45.558962     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-579606\" already exists" pod="kube-system/kube-apiserver-newest-cni-579606"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.612076     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619168     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/915df613-23ce-49e2-b125-d223024077b0-xtables-lock\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619314     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/915df613-23ce-49e2-b125-d223024077b0-lib-modules\") pod \"kube-proxy-5hjgn\" (UID: \"915df613-23ce-49e2-b125-d223024077b0\") " pod="kube-system/kube-proxy-5hjgn"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619356     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-cni-cfg\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619421     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-xtables-lock\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:45 newest-cni-579606 kubelet[668]: I1018 12:19:45.619435     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08c0018d-0f0f-435e-8868-31818d5639fa-lib-modules\") pod \"kindnet-2c4t6\" (UID: \"08c0018d-0f0f-435e-8868-31818d5639fa\") " pod="kube-system/kindnet-2c4t6"
	Oct 18 12:19:47 newest-cni-579606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 12:19:47 newest-cni-579606 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 12:19:47 newest-cni-579606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
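The status probe below reads a single field from minikube's Go-template status output. For reference, the same probe generalizes to the other status fields; a minimal sketch, assuming the standard status template fields (the profile name is taken from this run):

	# Hypothetical manual probe; Host/Kubelet/APIServer/Kubeconfig are minikube's documented status fields.
	out/minikube-linux-amd64 status -p newest-cni-579606 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'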
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-579606 -n newest-cni-579606
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-579606 -n newest-cni-579606: exit status 2 (319.977987ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-579606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499: exit status 1 (63.045543ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-p6bts" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-m7ktk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-25499" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-579606 describe pod coredns-66bc5c9577-p6bts storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m7ktk kubernetes-dashboard-855c9754f9-25499: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.76s)
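For context, the failing Pause subtest drives the freshly restarted profile through pause and unpause with the same binary used throughout this report. A minimal manual reproduction, assuming the same profile and the flags used elsewhere in this run, would be:

	# Sketch only: pause and unpause the profile by hand to observe the same exit status.
	out/minikube-linux-amd64 pause -p newest-cni-579606 --alsologtostderr -v=1
	out/minikube-linux-amd64 unpause -p newest-cni-579606 --alsologtostderr -v=1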

                                                
                                    

Test pass (264/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.82
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.82
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.8
22 TestOffline 89.21
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 132.21
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 8.45
48 TestAddons/StoppedEnableDisable 16.65
49 TestCertOptions 32.05
50 TestCertExpiration 212.55
52 TestForceSystemdFlag 27.46
53 TestForceSystemdEnv 31.26
55 TestKVMDriverInstallOrUpdate 0.56
59 TestErrorSpam/setup 22.49
60 TestErrorSpam/start 0.63
61 TestErrorSpam/status 0.9
62 TestErrorSpam/pause 6.13
63 TestErrorSpam/unpause 4.54
64 TestErrorSpam/stop 2.56
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 69.23
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.18
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.15
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.79
76 TestFunctional/serial/CacheCmd/cache/add_local 0.77
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.48
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 68.38
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.2
87 TestFunctional/serial/LogsFileCmd 1.21
88 TestFunctional/serial/InvalidService 3.93
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 6.37
92 TestFunctional/parallel/DryRun 0.36
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 1.05
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 23.69
102 TestFunctional/parallel/SSHCmd 0.63
103 TestFunctional/parallel/CpCmd 1.97
104 TestFunctional/parallel/MySQL 14.56
105 TestFunctional/parallel/FileSync 0.32
106 TestFunctional/parallel/CertSync 1.55
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
114 TestFunctional/parallel/License 0.28
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.57
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.67
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.87
122 TestFunctional/parallel/ImageCommands/Setup 0.44
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.25
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
146 TestFunctional/parallel/ProfileCmd/profile_list 0.44
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
148 TestFunctional/parallel/MountCmd/any-port 6.81
149 TestFunctional/parallel/MountCmd/specific-port 1.95
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 149.41
164 TestMultiControlPlane/serial/DeployApp 4.38
165 TestMultiControlPlane/serial/PingHostFromPods 0.95
166 TestMultiControlPlane/serial/AddWorkerNode 54.61
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
169 TestMultiControlPlane/serial/CopyFile 16.51
170 TestMultiControlPlane/serial/StopSecondaryNode 19.77
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.26
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 111.29
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.56
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
177 TestMultiControlPlane/serial/StopCluster 42.86
178 TestMultiControlPlane/serial/RestartCluster 57.72
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
180 TestMultiControlPlane/serial/AddSecondaryNode 43.13
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 40.07
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.96
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 30.39
211 TestKicCustomNetwork/use_default_bridge_network 24.07
212 TestKicExistingNetwork 23.67
213 TestKicCustomSubnet 27.61
214 TestKicStaticIP 26.55
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 48.1
219 TestMountStart/serial/StartWithMountFirst 5.53
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 5.36
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.35
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 93.16
231 TestMultiNode/serial/DeployApp2Nodes 3.45
232 TestMultiNode/serial/PingHostFrom2Pods 0.65
233 TestMultiNode/serial/AddNode 24.02
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.63
236 TestMultiNode/serial/CopyFile 9.44
237 TestMultiNode/serial/StopNode 2.23
238 TestMultiNode/serial/StartAfterStop 7.24
239 TestMultiNode/serial/RestartKeepsNodes 81.55
240 TestMultiNode/serial/DeleteNode 5.21
241 TestMultiNode/serial/StopMultiNode 30.29
242 TestMultiNode/serial/RestartMultiNode 50.46
243 TestMultiNode/serial/ValidateNameConflict 23.96
248 TestPreload 83.93
250 TestScheduledStopUnix 96.3
253 TestInsufficientStorage 12.73
254 TestRunningBinaryUpgrade 46.72
256 TestKubernetesUpgrade 318.56
257 TestMissingContainerUpgrade 98.09
265 TestNetworkPlugins/group/false 10.88
269 TestStoppedBinaryUpgrade/Setup 0.55
270 TestStoppedBinaryUpgrade/Upgrade 48.45
279 TestPause/serial/Start 41.88
280 TestStoppedBinaryUpgrade/MinikubeLogs 1
281 TestPause/serial/SecondStartNoReconfiguration 6.61
284 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
285 TestNoKubernetes/serial/StartWithK8s 22.55
286 TestNetworkPlugins/group/auto/Start 44.16
287 TestNoKubernetes/serial/StartWithStopK8s 8.61
288 TestNoKubernetes/serial/Start 4.81
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
290 TestNoKubernetes/serial/ProfileList 1.77
291 TestNoKubernetes/serial/Stop 1.29
292 TestNoKubernetes/serial/StartNoArgs 6.56
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
294 TestNetworkPlugins/group/kindnet/Start 40.28
295 TestNetworkPlugins/group/auto/KubeletFlags 0.32
296 TestNetworkPlugins/group/auto/NetCatPod 8.32
297 TestNetworkPlugins/group/auto/DNS 0.14
298 TestNetworkPlugins/group/auto/Localhost 0.1
299 TestNetworkPlugins/group/auto/HairPin 0.1
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/Start 50.55
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.46
304 TestNetworkPlugins/group/kindnet/DNS 0.11
305 TestNetworkPlugins/group/kindnet/Localhost 0.09
306 TestNetworkPlugins/group/kindnet/HairPin 0.09
307 TestNetworkPlugins/group/custom-flannel/Start 54.36
308 TestNetworkPlugins/group/enable-default-cni/Start 40.67
309 TestNetworkPlugins/group/flannel/Start 47.82
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.38
312 TestNetworkPlugins/group/calico/NetCatPod 8.26
313 TestNetworkPlugins/group/calico/DNS 0.12
314 TestNetworkPlugins/group/calico/Localhost 0.09
315 TestNetworkPlugins/group/calico/HairPin 0.09
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.38
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.34
320 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
321 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
322 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
323 TestNetworkPlugins/group/bridge/Start 37.25
324 TestNetworkPlugins/group/custom-flannel/DNS 0.15
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
329 TestNetworkPlugins/group/flannel/NetCatPod 9.24
331 TestStartStop/group/old-k8s-version/serial/FirstStart 53.5
333 TestStartStop/group/no-preload/serial/FirstStart 55.64
334 TestNetworkPlugins/group/flannel/DNS 0.15
335 TestNetworkPlugins/group/flannel/Localhost 0.12
336 TestNetworkPlugins/group/flannel/HairPin 0.12
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
338 TestNetworkPlugins/group/bridge/NetCatPod 9.25
339 TestNetworkPlugins/group/bridge/DNS 0.11
340 TestNetworkPlugins/group/bridge/Localhost 0.1
341 TestNetworkPlugins/group/bridge/HairPin 0.09
343 TestStartStop/group/embed-certs/serial/FirstStart 70.64
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.46
346 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
347 TestStartStop/group/no-preload/serial/DeployApp 7.26
349 TestStartStop/group/old-k8s-version/serial/Stop 16.35
351 TestStartStop/group/no-preload/serial/Stop 16.31
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
353 TestStartStop/group/old-k8s-version/serial/SecondStart 49.92
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
355 TestStartStop/group/no-preload/serial/SecondStart 46.44
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
358 TestStartStop/group/embed-certs/serial/DeployApp 8.28
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.66
361 TestStartStop/group/embed-certs/serial/Stop 18.15
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
363 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.03
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
365 TestStartStop/group/embed-certs/serial/SecondStart 45.65
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
372 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/newest-cni/serial/FirstStart 26.45
376 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
378 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
381 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/newest-cni/serial/Stop 12.54
385 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
388 TestStartStop/group/newest-cni/serial/SecondStart 10.67
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (4.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-584755 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-584755 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.818869274s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.82s)
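The json-events subtest consumes the machine-readable event stream emitted by -o=json. As a rough illustration only, the step events can be filtered out of that stream; a sketch assuming minikube's CloudEvents-style event type and field names:

	# Assumed schema: one JSON object per line, step events typed "io.k8s.sigs.minikube.step".
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-584755 \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'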

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 11:29:02.335930    9360 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 11:29:02.336028    9360 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
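The preload-exists subtest only asserts that the tarball cached by the previous step is present on disk. An equivalent manual check is a sketch as simple as listing the cache path printed in the log line above:

	# The path below is copied verbatim from the preload log line above.
	ls -lh /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4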

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-584755
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-584755: exit status 85 (58.880657ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-584755 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-584755 │ jenkins │ v1.37.0 │ 18 Oct 25 11:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:28:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:28:57.556717    9372 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:28:57.556861    9372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:28:57.556871    9372 out.go:374] Setting ErrFile to fd 2...
	I1018 11:28:57.556875    9372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:28:57.557065    9372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	W1018 11:28:57.557197    9372 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21647-5865/.minikube/config/config.json: open /home/jenkins/minikube-integration/21647-5865/.minikube/config/config.json: no such file or directory
	I1018 11:28:57.557682    9372 out.go:368] Setting JSON to true
	I1018 11:28:57.558556    9372 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":686,"bootTime":1760786252,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:28:57.558650    9372 start.go:141] virtualization: kvm guest
	I1018 11:28:57.561088    9372 out.go:99] [download-only-584755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1018 11:28:57.561250    9372 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 11:28:57.561293    9372 notify.go:220] Checking for updates...
	I1018 11:28:57.562782    9372 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:28:57.564391    9372 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:28:57.565819    9372 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:28:57.567107    9372 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 11:28:57.568424    9372 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 11:28:57.571017    9372 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 11:28:57.571251    9372 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:28:57.595233    9372 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 11:28:57.595301    9372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:28:58.000423    9372 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 11:28:57.988383521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:28:58.000521    9372 docker.go:318] overlay module found
	I1018 11:28:58.002042    9372 out.go:99] Using the docker driver based on user configuration
	I1018 11:28:58.002075    9372 start.go:305] selected driver: docker
	I1018 11:28:58.002085    9372 start.go:925] validating driver "docker" against <nil>
	I1018 11:28:58.002194    9372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:28:58.059773    9372 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 11:28:58.048985825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:28:58.059953    9372 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:28:58.060458    9372 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 11:28:58.060597    9372 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 11:28:58.062312    9372 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-584755 host does not exist
	  To start a cluster, run: "minikube start -p download-only-584755"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
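
The non-zero exit recorded above is evidently the expected outcome here: a --download-only profile never creates a control-plane node, so "minikube logs" exits with status 85 and the "host does not exist" hint, and the test logs the failure yet still passes. Reproducing it by hand against the same profile (assuming it had not been deleted yet):

	out/minikube-linux-amd64 logs -p download-only-584755
	# expected: exit status 85, with stdout ending in
	# "The control-plane node download-only-584755 host does not exist"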

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-584755
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (3.82s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-147645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-147645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.817795833s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.82s)

TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 11:29:06.560716    9360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 11:29:06.560754    9360 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-5865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-147645
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-147645: exit status 85 (63.066803ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-584755 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-584755 │ jenkins │ v1.37.0 │ 18 Oct 25 11:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ delete  │ -p download-only-584755                                                                                                                                                   │ download-only-584755 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │ 18 Oct 25 11:29 UTC │
	│ start   │ -o=json --download-only -p download-only-147645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-147645 │ jenkins │ v1.37.0 │ 18 Oct 25 11:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 11:29:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 11:29:02.782799    9723 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:29:02.783043    9723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:02.783059    9723 out.go:374] Setting ErrFile to fd 2...
	I1018 11:29:02.783064    9723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:29:02.783238    9723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:29:02.783693    9723 out.go:368] Setting JSON to true
	I1018 11:29:02.784479    9723 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":691,"bootTime":1760786252,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:29:02.784577    9723 start.go:141] virtualization: kvm guest
	I1018 11:29:02.786280    9723 out.go:99] [download-only-147645] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:29:02.786405    9723 notify.go:220] Checking for updates...
	I1018 11:29:02.787825    9723 out.go:171] MINIKUBE_LOCATION=21647
	I1018 11:29:02.789663    9723 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:29:02.790897    9723 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:29:02.795014    9723 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 11:29:02.796486    9723 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 11:29:02.799198    9723 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 11:29:02.799473    9723 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:29:02.823546    9723 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 11:29:02.823616    9723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:29:02.883838    9723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-18 11:29:02.872748352 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:29:02.883977    9723 docker.go:318] overlay module found
	I1018 11:29:02.885695    9723 out.go:99] Using the docker driver based on user configuration
	I1018 11:29:02.885742    9723 start.go:305] selected driver: docker
	I1018 11:29:02.885754    9723 start.go:925] validating driver "docker" against <nil>
	I1018 11:29:02.885871    9723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:29:02.943490    9723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-18 11:29:02.934358586 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:29:02.943679    9723 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 11:29:02.944193    9723 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 11:29:02.944346    9723 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 11:29:02.946607    9723 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-147645 host does not exist
	  To start a cluster, run: "minikube start -p download-only-147645"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-147645
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.38s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-063309 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-063309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-063309
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.8s)
=== RUN   TestBinaryMirror
I1018 11:29:07.622278    9360 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-525445 --alsologtostderr --binary-mirror http://127.0.0.1:46875 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-525445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-525445
--- PASS: TestBinaryMirror (0.80s)

TestOffline (89.21s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-285533 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-285533 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m26.707891966s)
helpers_test.go:175: Cleaning up "offline-crio-285533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-285533
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-285533: (2.499040588s)
--- PASS: TestOffline (89.21s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-162665
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-162665: exit status 85 (51.464799ms)

-- stdout --
	* Profile "addons-162665" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-162665"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-162665
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-162665: exit status 85 (53.973391ms)

-- stdout --
	* Profile "addons-162665" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-162665"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (132.21s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-162665 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-162665 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.206950599s)
--- PASS: TestAddons/Setup (132.21s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-162665 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-162665 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (8.45s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-162665 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-162665 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [63e62b2d-6b2a-4e68-be20-6ccd92ea0265] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [63e62b2d-6b2a-4e68-be20-6ccd92ea0265] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002709063s
addons_test.go:694: (dbg) Run:  kubectl --context addons-162665 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-162665 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-162665 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.45s)

TestAddons/StoppedEnableDisable (16.65s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-162665
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-162665: (16.403258901s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-162665
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-162665
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-162665
--- PASS: TestAddons/StoppedEnableDisable (16.65s)

TestCertOptions (32.05s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-473888 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-473888 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.929941224s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-473888 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-473888 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-473888 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-473888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-473888
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-473888: (2.456307821s)
--- PASS: TestCertOptions (32.05s)

TestCertExpiration (212.55s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-382425 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-382425 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.457743349s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-382425 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-382425 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.528139656s)
helpers_test.go:175: Cleaning up "cert-expiration-382425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-382425
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-382425: (2.55850569s)
--- PASS: TestCertExpiration (212.55s)
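
The two starts above exercise certificate rotation: the first issues cluster certificates valid for only 3m, and the second start with --cert-expiration=8760h must succeed by regenerating them; the roughly three-minute gap implied by the 212.55s total versus ~30s of combined start and delete time is consistent with waiting out the short validity window. A by-hand sketch of the same sequence, using an illustrative profile name:

	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
	# let the 3m certificates lapse, then restart with a long expiration:
	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio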

TestForceSystemdFlag (27.46s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-328756 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-328756 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.152488337s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-328756 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-328756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-328756
E1018 12:11:21.325213    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-328756: (5.022762415s)
--- PASS: TestForceSystemdFlag (27.46s)

TestForceSystemdEnv (31.26s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-297456 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-297456 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.564777863s)
helpers_test.go:175: Cleaning up "force-systemd-env-297456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-297456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-297456: (5.698890428s)
--- PASS: TestForceSystemdEnv (31.26s)

TestKVMDriverInstallOrUpdate (0.56s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1018 12:10:24.425582    9360 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 12:10:24.425810    9360 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3305753213/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:10:24.463552    9360 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3305753213/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 12:10:24.463611    9360 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 12:10:24.463775    9360 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 12:10:24.463829    9360 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3305753213/001/docker-machine-driver-kvm2
I1018 12:10:24.829173    9360 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3305753213/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:10:24.850255    9360 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3305753213/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.56s)

TestErrorSpam/setup (22.49s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-928008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-928008 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-928008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-928008 --driver=docker  --container-runtime=crio: (22.488961039s)
--- PASS: TestErrorSpam/setup (22.49s)

TestErrorSpam/start (0.63s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.9s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 status
--- PASS: TestErrorSpam/status (0.90s)

TestErrorSpam/pause (6.13s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause: exit status 80 (2.061633408s)

-- stdout --
	* Pausing node nospam-928008 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:34:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause: exit status 80 (1.872820233s)

-- stdout --
	* Pausing node nospam-928008 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:34:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause: exit status 80 (2.18988095s)

-- stdout --
	* Pausing node nospam-928008 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:34:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.13s)

TestErrorSpam/unpause (4.54s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause: exit status 80 (1.432170529s)

-- stdout --
	* Unpausing node nospam-928008 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:34:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause: exit status 80 (1.453467192s)

-- stdout --
	* Unpausing node nospam-928008 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:34:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause: exit status 80 (1.656032745s)

-- stdout --
	* Unpausing node nospam-928008 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T11:34:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.54s)
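
The two unpause failures above share one root cause: `sudo runc list -f json` exits 1 because /run/runc is missing on the node, and minikube surfaces that as GUEST_UNPAUSE (exit status 80). A minimal manual reproduction of the failing step (a sketch, assuming the nospam-928008 profile is still running; both commands are taken from the log):

	out/minikube-linux-amd64 -p nospam-928008 ssh "ls /run/runc"              # the state directory runc complained about
	out/minikube-linux-amd64 -p nospam-928008 ssh "sudo runc list -f json"    # the exact listing the unpause path runs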

                                                
                                    
TestErrorSpam/stop (2.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 stop: (2.378210146s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928008 --log_dir /tmp/nospam-928008 stop
--- PASS: TestErrorSpam/stop (2.56s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21647-5865/.minikube/files/etc/test/nested/copy/9360/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.23s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874021 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-874021 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.230744299s)
--- PASS: TestFunctional/serial/StartWithProxy (69.23s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.18s)

=== RUN   TestFunctional/serial/SoftStart
I1018 11:36:13.519499    9360 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874021 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-874021 --alsologtostderr -v=8: (6.182498676s)
functional_test.go:678: soft start took 6.183307855s for "functional-874021" cluster.
I1018 11:36:19.702438    9360 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.18s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.15s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-874021 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cache add registry.k8s.io/pause:3.3
E1018 11:36:21.326019    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:21.332460    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:21.343909    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:21.365343    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:21.406710    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:21.488123    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:21.649678    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cache add registry.k8s.io/pause:latest
E1018 11:36:21.971041    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:22.612983    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-874021 /tmp/TestFunctionalserialCacheCmdcacheadd_local2386159791/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cache add minikube-local-cache-test:functional-874021
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cache delete minikube-local-cache-test:functional-874021
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-874021
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl rmi registry.k8s.io/pause:latest
E1018 11:36:23.894543    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.026868ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)
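
The cache_reload sequence above can be replayed by hand with the same commands: delete the image on the node, confirm `crictl inspecti` now fails, then let `cache reload` restore it from the host-side cache:

	out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was removed
	out/minikube-linux-amd64 -p functional-874021 cache reload
	out/minikube-linux-amd64 -p functional-874021 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again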

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 kubectl -- --context functional-874021 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-874021 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (68.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874021 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 11:36:26.456753    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:31.578696    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:36:41.820334    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:37:02.301707    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-874021 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m8.377472447s)
functional_test.go:776: restart took 1m8.377572651s for "functional-874021" cluster.
I1018 11:37:33.991237    9360 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (68.38s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-874021 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 logs: (1.200970051s)
--- PASS: TestFunctional/serial/LogsCmd (1.20s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 logs --file /tmp/TestFunctionalserialLogsFileCmd768729221/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 logs --file /tmp/TestFunctionalserialLogsFileCmd768729221/001/logs.txt: (1.207705421s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                    
TestFunctional/serial/InvalidService (3.93s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-874021 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-874021
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-874021: exit status 115 (330.94745ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32037 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-874021 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.93s)
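
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: invalidsvc.yaml defines a service whose selector matches no running pod (per the stderr above), so the NodePort printed in the table has nothing behind it. A quick way to see that from outside the harness (a hypothetical check, not part of the test):

	kubectl --context functional-874021 get endpoints invalid-svc    # ENDPOINTS stays <none>: no pod backs the service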

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 config get cpus: exit status 14 (59.451106ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 config get cpus: exit status 14 (44.378959ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
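
Note that only `config get` on a missing key fails, and with a dedicated exit code (14, "specified key could not be found in config"); `config set` and `config unset` exit 0 throughout. The round trip exercised above, condensed:

	out/minikube-linux-amd64 -p functional-874021 config set cpus 2
	out/minikube-linux-amd64 -p functional-874021 config get cpus      # prints 2
	out/minikube-linux-amd64 -p functional-874021 config unset cpus
	out/minikube-linux-amd64 -p functional-874021 config get cpus      # exit 14: key not in config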

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-874021 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-874021 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 48657: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.37s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874021 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-874021 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (153.722585ms)

                                                
                                                
-- stdout --
	* [functional-874021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:38:08.445296   47506 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:38:08.445551   47506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:38:08.445560   47506 out.go:374] Setting ErrFile to fd 2...
	I1018 11:38:08.445564   47506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:38:08.445753   47506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:38:08.446172   47506 out.go:368] Setting JSON to false
	I1018 11:38:08.447137   47506 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1236,"bootTime":1760786252,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:38:08.447205   47506 start.go:141] virtualization: kvm guest
	I1018 11:38:08.450897   47506 out.go:179] * [functional-874021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 11:38:08.452057   47506 notify.go:220] Checking for updates...
	I1018 11:38:08.453934   47506 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:38:08.455250   47506 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:38:08.456445   47506 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:38:08.457577   47506 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 11:38:08.458655   47506 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:38:08.459794   47506 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:38:08.461442   47506 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:38:08.462169   47506 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:38:08.486163   47506 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 11:38:08.486262   47506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:38:08.544754   47506 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-18 11:38:08.534787202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:38:08.544886   47506 docker.go:318] overlay module found
	I1018 11:38:08.546631   47506 out.go:179] * Using the docker driver based on existing profile
	I1018 11:38:08.547812   47506 start.go:305] selected driver: docker
	I1018 11:38:08.547833   47506 start.go:925] validating driver "docker" against &{Name:functional-874021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-874021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:38:08.547922   47506 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:38:08.549609   47506 out.go:203] 
	W1018 11:38:08.550727   47506 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 11:38:08.551852   47506 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874021 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
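
The first invocation failing is the point of the test: `--memory 250MB` is rejected during validation (RSRC_INSUFFICIENT_REQ_MEMORY, exit 23) before anything is created, while the second dry run without a memory override passes. Any request at or above the 1800MB floor named in the error would validate, e.g. (a sketch, not executed in this job):

	out/minikube-linux-amd64 start -p functional-874021 --dry-run --memory=2048mb --alsologtostderr --driver=docker --container-runtime=crio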

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874021 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-874021 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (151.303493ms)

                                                
                                                
-- stdout --
	* [functional-874021] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 11:38:08.809978   47729 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:38:08.810113   47729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:38:08.810122   47729 out.go:374] Setting ErrFile to fd 2...
	I1018 11:38:08.810126   47729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:38:08.810441   47729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:38:08.810902   47729 out.go:368] Setting JSON to false
	I1018 11:38:08.811937   47729 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1237,"bootTime":1760786252,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 11:38:08.812027   47729 start.go:141] virtualization: kvm guest
	I1018 11:38:08.813725   47729 out.go:179] * [functional-874021] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 11:38:08.815404   47729 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 11:38:08.815470   47729 notify.go:220] Checking for updates...
	I1018 11:38:08.817577   47729 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 11:38:08.818677   47729 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 11:38:08.819892   47729 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 11:38:08.823367   47729 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 11:38:08.824678   47729 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 11:38:08.826553   47729 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:38:08.827082   47729 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 11:38:08.850363   47729 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 11:38:08.850444   47729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:38:08.905514   47729 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-18 11:38:08.895667342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:38:08.905622   47729 docker.go:318] overlay module found
	I1018 11:38:08.907878   47729 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 11:38:08.908967   47729 start.go:305] selected driver: docker
	I1018 11:38:08.908985   47729 start.go:925] validating driver "docker" against &{Name:functional-874021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-874021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 11:38:08.909089   47729 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 11:38:08.910697   47729 out.go:203] 
	W1018 11:38:08.911902   47729 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 11:38:08.912846   47729 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (23.69s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ec5c79fe-1bcf-4546-8d34-de800453a9c1] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003134175s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-874021 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-874021 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-874021 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-874021 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5a3de438-fe32-42a1-8c4c-d8a82763881a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5a3de438-fe32-42a1-8c4c-d8a82763881a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003961442s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-874021 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-874021 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-874021 apply -f testdata/storage-provisioner/pod.yaml
I1018 11:38:05.600119    9360 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4679382f-2b32-4474-87f6-f2ab699e6cce] Pending
helpers_test.go:352: "sp-pod" [4679382f-2b32-4474-87f6-f2ab699e6cce] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003642783s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-874021 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.69s)
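
The persistence claim in this test rests on the second sp-pod seeing a file written by the first: both pods mount the same myclaim PVC, so /tmp/mount/foo survives the pod deletion. The essential steps from the log, condensed:

	kubectl --context functional-874021 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-874021 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-874021 apply -f testdata/storage-provisioner/pod.yaml    # a new sp-pod, same claim
	kubectl --context functional-874021 exec sp-pod -- ls /tmp/mount                      # foo is still there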

                                                
                                    
TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "echo hello"
E1018 11:37:43.263977    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.97s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh -n functional-874021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cp functional-874021:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4132200023/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh -n functional-874021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh -n functional-874021 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.97s)

                                                
                                    
TestFunctional/parallel/MySQL (14.56s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-874021 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-r7f94" [89b59b9a-453c-46a2-8089-ba20704a1b45] Pending
helpers_test.go:352: "mysql-5bb876957f-r7f94" [89b59b9a-453c-46a2-8089-ba20704a1b45] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-r7f94" [89b59b9a-453c-46a2-8089-ba20704a1b45] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 12.003631925s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-874021 exec mysql-5bb876957f-r7f94 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-874021 exec mysql-5bb876957f-r7f94 -- mysql -ppassword -e "show databases;": exit status 1 (91.0103ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 11:37:52.709330    9360 retry.go:31] will retry after 578.937722ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-874021 exec mysql-5bb876957f-r7f94 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-874021 exec mysql-5bb876957f-r7f94 -- mysql -ppassword -e "show databases;": exit status 1 (87.030065ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 11:37:53.376414    9360 retry.go:31] will retry after 1.494539927s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-874021 exec mysql-5bb876957f-r7f94 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (14.56s)
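
The two ERROR 2002 exits are ordinary startup noise: the pod reports Running before mysqld has created /var/run/mysqld/mysqld.sock, so the harness backs off and retries until the query succeeds. A hand-rolled equivalent of that retry loop (a sketch; mysqladmin is an assumption about the image, though the stock mysql image ships it):

	until kubectl --context functional-874021 exec mysql-5bb876957f-r7f94 -- mysqladmin -ppassword ping; do
		sleep 2    # wait for mysqld to open its socket
	done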

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9360/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo cat /etc/test/nested/copy/9360/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (1.55s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9360.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo cat /etc/ssl/certs/9360.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9360.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo cat /usr/share/ca-certificates/9360.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/93602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo cat /etc/ssl/certs/93602.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/93602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo cat /usr/share/ca-certificates/93602.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.55s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-874021 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh "sudo systemctl is-active docker": exit status 1 (297.785978ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh "sudo systemctl is-active containerd": exit status 1 (296.315504ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874021 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874021 image ls --format short --alsologtostderr:
I1018 11:38:13.713705   49047 out.go:360] Setting OutFile to fd 1 ...
I1018 11:38:13.714084   49047 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:13.714102   49047 out.go:374] Setting ErrFile to fd 2...
I1018 11:38:13.714109   49047 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:13.714476   49047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
I1018 11:38:13.715498   49047 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:13.715656   49047 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:13.716222   49047 cli_runner.go:164] Run: docker container inspect functional-874021 --format={{.State.Status}}
I1018 11:38:13.739672   49047 ssh_runner.go:195] Run: systemctl --version
I1018 11:38:13.739741   49047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874021
I1018 11:38:13.763384   49047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/functional-874021/id_rsa Username:docker}
I1018 11:38:13.869395   49047 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.67s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874021 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874021 image ls --format table --alsologtostderr:
I1018 11:38:14.607217   49197 out.go:360] Setting OutFile to fd 1 ...
I1018 11:38:14.607793   49197 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:14.607807   49197 out.go:374] Setting ErrFile to fd 2...
I1018 11:38:14.607813   49197 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:14.609845   49197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
I1018 11:38:14.610659   49197 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:14.610830   49197 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:14.611405   49197 cli_runner.go:164] Run: docker container inspect functional-874021 --format={{.State.Status}}
I1018 11:38:14.634211   49197 ssh_runner.go:195] Run: systemctl --version
I1018 11:38:14.634266   49197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874021
I1018 11:38:14.656672   49197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/functional-874021/id_rsa Username:docker}
I1018 11:38:14.763387   49197 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874021 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874021 image ls --format json --alsologtostderr:
I1018 11:38:14.375646   49118 out.go:360] Setting OutFile to fd 1 ...
I1018 11:38:14.375753   49118 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:14.375774   49118 out.go:374] Setting ErrFile to fd 2...
I1018 11:38:14.375780   49118 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:14.376169   49118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
I1018 11:38:14.376844   49118 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:14.376948   49118 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:14.377376   49118 cli_runner.go:164] Run: docker container inspect functional-874021 --format={{.State.Status}}
I1018 11:38:14.397663   49118 ssh_runner.go:195] Run: systemctl --version
I1018 11:38:14.397722   49118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874021
I1018 11:38:14.417796   49118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/functional-874021/id_rsa Username:docker}
I1018 11:38:14.516617   49118 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874021 image ls --format yaml --alsologtostderr:
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874021 image ls --format yaml --alsologtostderr:
I1018 11:38:14.863636   49264 out.go:360] Setting OutFile to fd 1 ...
I1018 11:38:14.863960   49264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:14.863974   49264 out.go:374] Setting ErrFile to fd 2...
I1018 11:38:14.863980   49264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:14.864274   49264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
I1018 11:38:14.864945   49264 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:14.865073   49264 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:14.865508   49264 cli_runner.go:164] Run: docker container inspect functional-874021 --format={{.State.Status}}
I1018 11:38:14.885680   49264 ssh_runner.go:195] Run: systemctl --version
I1018 11:38:14.885749   49264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874021
I1018 11:38:14.908300   49264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/functional-874021/id_rsa Username:docker}
I1018 11:38:15.014997   49264 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh pgrep buildkitd: exit status 1 (264.451324ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image build -t localhost/my-image:functional-874021 testdata/build --alsologtostderr
2025/10/18 11:38:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 image build -t localhost/my-image:functional-874021 testdata/build --alsologtostderr: (2.378419647s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874021 image build -t localhost/my-image:functional-874021 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ab047e0aa54
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-874021
--> 7e87e6d7827
Successfully tagged localhost/my-image:functional-874021
7e87e6d7827b0ae7a57a71b069347ad978e7a223682d1f414d6ce34bd38f92de
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874021 image build -t localhost/my-image:functional-874021 testdata/build --alsologtostderr:
I1018 11:38:15.419504   49464 out.go:360] Setting OutFile to fd 1 ...
I1018 11:38:15.419755   49464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:15.419791   49464 out.go:374] Setting ErrFile to fd 2...
I1018 11:38:15.419797   49464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 11:38:15.420033   49464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
I1018 11:38:15.420605   49464 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:15.421221   49464 config.go:182] Loaded profile config "functional-874021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 11:38:15.421601   49464 cli_runner.go:164] Run: docker container inspect functional-874021 --format={{.State.Status}}
I1018 11:38:15.439357   49464 ssh_runner.go:195] Run: systemctl --version
I1018 11:38:15.439410   49464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874021
I1018 11:38:15.456590   49464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/functional-874021/id_rsa Username:docker}
I1018 11:38:15.553341   49464 build_images.go:161] Building image from path: /tmp/build.639063149.tar
I1018 11:38:15.553415   49464 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 11:38:15.561446   49464 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.639063149.tar
I1018 11:38:15.565326   49464 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.639063149.tar: stat -c "%s %y" /var/lib/minikube/build/build.639063149.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.639063149.tar': No such file or directory
I1018 11:38:15.565370   49464 ssh_runner.go:362] scp /tmp/build.639063149.tar --> /var/lib/minikube/build/build.639063149.tar (3072 bytes)
I1018 11:38:15.583543   49464 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.639063149
I1018 11:38:15.591317   49464 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.639063149 -xf /var/lib/minikube/build/build.639063149.tar
I1018 11:38:15.599579   49464 crio.go:315] Building image: /var/lib/minikube/build/build.639063149
I1018 11:38:15.599660   49464 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-874021 /var/lib/minikube/build/build.639063149 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 11:38:17.733457   49464 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-874021 /var/lib/minikube/build/build.639063149 --cgroup-manager=cgroupfs: (2.133772831s)
I1018 11:38:17.733518   49464 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.639063149
I1018 11:38:17.741558   49464 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.639063149.tar
I1018 11:38:17.749002   49464 build_images.go:217] Built localhost/my-image:functional-874021 from /tmp/build.639063149.tar
I1018 11:38:17.749040   49464 build_images.go:133] succeeded building to: functional-874021
I1018 11:38:17.749045   49464 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls
E1018 11:39:05.185874    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:41:21.325791    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:41:49.028027    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:46:21.325412    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

TestFunctional/parallel/ImageCommands/Setup (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-874021
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.44s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-874021 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-874021 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-874021 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-874021 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 42723: os: process already finished
helpers_test.go:519: unable to terminate pid 42466: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-874021 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-874021 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e86ff881-02ab-4d20-8d8c-4c0f8732382f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [e86ff881-02ab-4d20-8d8c-4c0f8732382f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.03337425s
I1018 11:37:55.344719    9360 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image rm kicbase/echo-server:functional-874021 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-874021 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
I1018 11:37:55.402966    9360 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.99.15 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-874021 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "380.624441ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.828009ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "389.248971ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "54.956141ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (6.81s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdany-port3181564668/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760787477889163006" to /tmp/TestFunctionalparallelMountCmdany-port3181564668/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760787477889163006" to /tmp/TestFunctionalparallelMountCmdany-port3181564668/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760787477889163006" to /tmp/TestFunctionalparallelMountCmdany-port3181564668/001/test-1760787477889163006
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.960221ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 11:37:58.173440    9360 retry.go:31] will retry after 680.744596ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 11:37 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 11:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 11:37 test-1760787477889163006
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh cat /mount-9p/test-1760787477889163006
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-874021 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [f872e44c-78c4-408c-9357-d9acfc1bf07b] Pending
helpers_test.go:352: "busybox-mount" [f872e44c-78c4-408c-9357-d9acfc1bf07b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [f872e44c-78c4-408c-9357-d9acfc1bf07b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [f872e44c-78c4-408c-9357-d9acfc1bf07b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003313517s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-874021 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdany-port3181564668/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.81s)

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdspecific-port4120149293/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.352257ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 11:38:04.970089    9360 retry.go:31] will retry after 706.460517ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdspecific-port4120149293/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh "sudo umount -f /mount-9p": exit status 1 (255.557177ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-874021 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdspecific-port4120149293/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781982942/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781982942/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781982942/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T" /mount1: exit status 1 (325.040134ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 11:38:06.978380    9360 retry.go:31] will retry after 604.183933ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-874021 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781982942/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781982942/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874021 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3781982942/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 service list: (1.694768155s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-874021 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-874021 service list -o json: (1.690096572s)
functional_test.go:1504: Took "1.690180484s" to run "out/minikube-linux-amd64 -p functional-874021 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-874021
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-874021
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-874021
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (149.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m28.702807265s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (149.41s)

TestMultiControlPlane/serial/DeployApp (4.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 kubectl -- rollout status deployment/busybox: (2.56893137s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-7jlgt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-9qwn5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-gch94 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-7jlgt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-9qwn5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-gch94 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-7jlgt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-9qwn5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-gch94 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.38s)

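The three nslookup rounds above are the whole of the DNS check: every pod in the busybox deployment must resolve an external name, the in-cluster short name, and the fully qualified service name. A condensed manual version (the app=busybox label selector is an assumption about ha-pod-dns-test.yaml, which this log does not show):

	# Pick one pod from the deployment; label selector is assumed, not from the log.
	POD=$(kubectl --context ha-331043 get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
	for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
	  kubectl --context ha-331043 exec "$POD" -- nslookup "$name"
	done
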
TestMultiControlPlane/serial/PingHostFromPods (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-7jlgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-7jlgt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-9qwn5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-9qwn5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-gch94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 kubectl -- exec busybox-7b57f96db7-gch94 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.95s)

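The pipeline in each exec call extracts the resolved address of host.minikube.internal from busybox's nslookup output (line 5, third space-separated field; 192.168.49.1 in this run), and the follow-up ping confirms the pod can reach the host. Split into two steps, assuming the same busybox nslookup output layout:

	# Resolve host.minikube.internal inside a pod and capture the IP.
	HOST_IP=$(kubectl --context ha-331043 exec busybox-7b57f96db7-7jlgt -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	# Ping the host gateway (192.168.49.1 above) from the pod.
	kubectl --context ha-331043 exec busybox-7b57f96db7-7jlgt -- sh -c "ping -c 1 $HOST_IP"
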
TestMultiControlPlane/serial/AddWorkerNode (54.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 node add --alsologtostderr -v 5
E1018 11:51:21.326220    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 node add --alsologtostderr -v 5: (53.748810678s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.61s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-331043 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (16.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp testdata/cp-test.txt ha-331043:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2041609972/001/cp-test_ha-331043.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043:/home/docker/cp-test.txt ha-331043-m02:/home/docker/cp-test_ha-331043_ha-331043-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test_ha-331043_ha-331043-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043:/home/docker/cp-test.txt ha-331043-m03:/home/docker/cp-test_ha-331043_ha-331043-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test_ha-331043_ha-331043-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043:/home/docker/cp-test.txt ha-331043-m04:/home/docker/cp-test_ha-331043_ha-331043-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test_ha-331043_ha-331043-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp testdata/cp-test.txt ha-331043-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2041609972/001/cp-test_ha-331043-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m02:/home/docker/cp-test.txt ha-331043:/home/docker/cp-test_ha-331043-m02_ha-331043.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test_ha-331043-m02_ha-331043.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m02:/home/docker/cp-test.txt ha-331043-m03:/home/docker/cp-test_ha-331043-m02_ha-331043-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test_ha-331043-m02_ha-331043-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m02:/home/docker/cp-test.txt ha-331043-m04:/home/docker/cp-test_ha-331043-m02_ha-331043-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test_ha-331043-m02_ha-331043-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp testdata/cp-test.txt ha-331043-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2041609972/001/cp-test_ha-331043-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m03:/home/docker/cp-test.txt ha-331043:/home/docker/cp-test_ha-331043-m03_ha-331043.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test_ha-331043-m03_ha-331043.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m03:/home/docker/cp-test.txt ha-331043-m02:/home/docker/cp-test_ha-331043-m03_ha-331043-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test_ha-331043-m03_ha-331043-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m03:/home/docker/cp-test.txt ha-331043-m04:/home/docker/cp-test_ha-331043-m03_ha-331043-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test_ha-331043-m03_ha-331043-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp testdata/cp-test.txt ha-331043-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2041609972/001/cp-test_ha-331043-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m04:/home/docker/cp-test.txt ha-331043:/home/docker/cp-test_ha-331043-m04_ha-331043.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test_ha-331043-m04_ha-331043.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m04:/home/docker/cp-test.txt ha-331043-m02:/home/docker/cp-test_ha-331043-m04_ha-331043-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 "sudo cat /home/docker/cp-test_ha-331043-m04_ha-331043-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 cp ha-331043-m04:/home/docker/cp-test.txt ha-331043-m03:/home/docker/cp-test_ha-331043-m04_ha-331043-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m03 "sudo cat /home/docker/cp-test_ha-331043-m04_ha-331043-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.51s)

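CopyFile walks the full source-by-destination matrix: minikube cp pushes the file, and minikube ssh -n cats it back on the target node to confirm the contents arrived. One host-to-node and one node-to-node round trip from the run above, isolated:

	# Host -> node, then verify on the node.
	out/minikube-linux-amd64 -p ha-331043 cp testdata/cp-test.txt ha-331043:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043 "sudo cat /home/docker/cp-test.txt"

	# Node -> node, then verify on the destination node.
	out/minikube-linux-amd64 -p ha-331043 cp ha-331043:/home/docker/cp-test.txt \
	  ha-331043-m02:/home/docker/cp-test_ha-331043_ha-331043-m02.txt
	out/minikube-linux-amd64 -p ha-331043 ssh -n ha-331043-m02 \
	  "sudo cat /home/docker/cp-test_ha-331043_ha-331043-m02.txt"
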
TestMultiControlPlane/serial/StopSecondaryNode (19.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 node stop m02 --alsologtostderr -v 5: (19.085847006s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5: exit status 7 (682.619652ms)

-- stdout --
	ha-331043
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-331043-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-331043-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-331043-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1018 11:52:07.068445   73954 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:52:07.068703   73954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:52:07.068713   73954 out.go:374] Setting ErrFile to fd 2...
	I1018 11:52:07.068717   73954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:52:07.068960   73954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:52:07.069142   73954 out.go:368] Setting JSON to false
	I1018 11:52:07.069170   73954 mustload.go:65] Loading cluster: ha-331043
	I1018 11:52:07.069286   73954 notify.go:220] Checking for updates...
	I1018 11:52:07.069712   73954 config.go:182] Loaded profile config "ha-331043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:52:07.069732   73954 status.go:174] checking status of ha-331043 ...
	I1018 11:52:07.070245   73954 cli_runner.go:164] Run: docker container inspect ha-331043 --format={{.State.Status}}
	I1018 11:52:07.089649   73954 status.go:371] ha-331043 host status = "Running" (err=<nil>)
	I1018 11:52:07.089674   73954 host.go:66] Checking if "ha-331043" exists ...
	I1018 11:52:07.089933   73954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-331043
	I1018 11:52:07.108174   73954 host.go:66] Checking if "ha-331043" exists ...
	I1018 11:52:07.108458   73954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:52:07.108495   73954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-331043
	I1018 11:52:07.126301   73954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/ha-331043/id_rsa Username:docker}
	I1018 11:52:07.220667   73954 ssh_runner.go:195] Run: systemctl --version
	I1018 11:52:07.227146   73954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:52:07.240321   73954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 11:52:07.300098   73954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 11:52:07.289571667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 11:52:07.300591   73954 kubeconfig.go:125] found "ha-331043" server: "https://192.168.49.254:8443"
	I1018 11:52:07.300617   73954 api_server.go:166] Checking apiserver status ...
	I1018 11:52:07.300654   73954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:52:07.312583   73954 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	W1018 11:52:07.321236   73954 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 11:52:07.321291   73954 ssh_runner.go:195] Run: ls
	I1018 11:52:07.325157   73954 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 11:52:07.329147   73954 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 11:52:07.329168   73954 status.go:463] ha-331043 apiserver status = Running (err=<nil>)
	I1018 11:52:07.329177   73954 status.go:176] ha-331043 status: &{Name:ha-331043 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:52:07.329197   73954 status.go:174] checking status of ha-331043-m02 ...
	I1018 11:52:07.329461   73954 cli_runner.go:164] Run: docker container inspect ha-331043-m02 --format={{.State.Status}}
	I1018 11:52:07.347508   73954 status.go:371] ha-331043-m02 host status = "Stopped" (err=<nil>)
	I1018 11:52:07.347531   73954 status.go:384] host is not running, skipping remaining checks
	I1018 11:52:07.347539   73954 status.go:176] ha-331043-m02 status: &{Name:ha-331043-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:52:07.347563   73954 status.go:174] checking status of ha-331043-m03 ...
	I1018 11:52:07.347815   73954 cli_runner.go:164] Run: docker container inspect ha-331043-m03 --format={{.State.Status}}
	I1018 11:52:07.366120   73954 status.go:371] ha-331043-m03 host status = "Running" (err=<nil>)
	I1018 11:52:07.366145   73954 host.go:66] Checking if "ha-331043-m03" exists ...
	I1018 11:52:07.366447   73954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-331043-m03
	I1018 11:52:07.384204   73954 host.go:66] Checking if "ha-331043-m03" exists ...
	I1018 11:52:07.384538   73954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:52:07.384588   73954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-331043-m03
	I1018 11:52:07.403631   73954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/ha-331043-m03/id_rsa Username:docker}
	I1018 11:52:07.499260   73954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:52:07.512428   73954 kubeconfig.go:125] found "ha-331043" server: "https://192.168.49.254:8443"
	I1018 11:52:07.512460   73954 api_server.go:166] Checking apiserver status ...
	I1018 11:52:07.512501   73954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 11:52:07.523311   73954 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W1018 11:52:07.531789   73954 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 11:52:07.531833   73954 ssh_runner.go:195] Run: ls
	I1018 11:52:07.535612   73954 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 11:52:07.539775   73954 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 11:52:07.539799   73954 status.go:463] ha-331043-m03 apiserver status = Running (err=<nil>)
	I1018 11:52:07.539808   73954 status.go:176] ha-331043-m03 status: &{Name:ha-331043-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:52:07.539821   73954 status.go:174] checking status of ha-331043-m04 ...
	I1018 11:52:07.540083   73954 cli_runner.go:164] Run: docker container inspect ha-331043-m04 --format={{.State.Status}}
	I1018 11:52:07.559002   73954 status.go:371] ha-331043-m04 host status = "Running" (err=<nil>)
	I1018 11:52:07.559026   73954 host.go:66] Checking if "ha-331043-m04" exists ...
	I1018 11:52:07.559265   73954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-331043-m04
	I1018 11:52:07.577915   73954 host.go:66] Checking if "ha-331043-m04" exists ...
	I1018 11:52:07.578186   73954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 11:52:07.578225   73954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-331043-m04
	I1018 11:52:07.595296   73954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/ha-331043-m04/id_rsa Username:docker}
	I1018 11:52:07.691104   73954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 11:52:07.703866   73954 status.go:176] ha-331043-m04 status: &{Name:ha-331043-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.77s)

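The exit status 7 above is the expected outcome, not a failure: with m02 stopped, minikube status still prints per-node state on stdout and signals the degraded cluster through a non-zero exit code, which is what the test asserts. A sketch of branching on that, assuming the exit-code behavior shown in this run:

	if out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5; then
	  echo "all nodes running"
	else
	  echo "cluster degraded: status exited $?"   # 7 in the run above
	fi
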
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 node start m02 --alsologtostderr -v 5: (8.338133818s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 stop --alsologtostderr -v 5
E1018 11:52:40.620288    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:40.626738    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:40.638222    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:40.659697    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:40.701118    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:40.782603    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:40.944668    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:41.266423    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:41.908663    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:43.191019    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:44.389908    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:45.753008    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:52:50.875324    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:53:01.116928    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 stop --alsologtostderr -v 5: (51.016529381s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 start --wait true --alsologtostderr -v 5
E1018 11:53:21.598991    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 11:54:02.560690    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 start --wait true --alsologtostderr -v 5: (1m0.165232313s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.29s)

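The invariant this test checks is that a full stop/start cycle preserves the node roster, which is why node list runs on both sides of the restart. The same check by hand, diffing the two listings (the temp-file names are illustrative):

	out/minikube-linux-amd64 -p ha-331043 node list > /tmp/nodes.before
	out/minikube-linux-amd64 -p ha-331043 stop
	out/minikube-linux-amd64 -p ha-331043 start --wait true
	out/minikube-linux-amd64 -p ha-331043 node list > /tmp/nodes.after
	diff /tmp/nodes.before /tmp/nodes.after && echo "node list preserved"
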
TestMultiControlPlane/serial/DeleteSecondaryNode (10.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 node delete m03 --alsologtostderr -v 5: (9.759448214s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.56s)

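The go-template in the final step walks every node's status.conditions and prints the Ready condition's status, so after deleting m03 a healthy cluster yields exactly three True lines. The same template on its own:

	# Print one True/False line per node, taken from its Ready condition.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
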
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (42.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 stop --alsologtostderr -v 5: (42.753626947s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5: exit status 7 (102.625531ms)

-- stdout --
	ha-331043
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-331043-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-331043-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 11:55:03.874134   87922 out.go:360] Setting OutFile to fd 1 ...
	I1018 11:55:03.874381   87922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:55:03.874390   87922 out.go:374] Setting ErrFile to fd 2...
	I1018 11:55:03.874394   87922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 11:55:03.874604   87922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 11:55:03.874793   87922 out.go:368] Setting JSON to false
	I1018 11:55:03.874819   87922 mustload.go:65] Loading cluster: ha-331043
	I1018 11:55:03.874880   87922 notify.go:220] Checking for updates...
	I1018 11:55:03.875198   87922 config.go:182] Loaded profile config "ha-331043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 11:55:03.875211   87922 status.go:174] checking status of ha-331043 ...
	I1018 11:55:03.875642   87922 cli_runner.go:164] Run: docker container inspect ha-331043 --format={{.State.Status}}
	I1018 11:55:03.894177   87922 status.go:371] ha-331043 host status = "Stopped" (err=<nil>)
	I1018 11:55:03.894231   87922 status.go:384] host is not running, skipping remaining checks
	I1018 11:55:03.894248   87922 status.go:176] ha-331043 status: &{Name:ha-331043 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:55:03.894289   87922 status.go:174] checking status of ha-331043-m02 ...
	I1018 11:55:03.894676   87922 cli_runner.go:164] Run: docker container inspect ha-331043-m02 --format={{.State.Status}}
	I1018 11:55:03.912439   87922 status.go:371] ha-331043-m02 host status = "Stopped" (err=<nil>)
	I1018 11:55:03.912463   87922 status.go:384] host is not running, skipping remaining checks
	I1018 11:55:03.912471   87922 status.go:176] ha-331043-m02 status: &{Name:ha-331043-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 11:55:03.912495   87922 status.go:174] checking status of ha-331043-m04 ...
	I1018 11:55:03.912743   87922 cli_runner.go:164] Run: docker container inspect ha-331043-m04 --format={{.State.Status}}
	I1018 11:55:03.930720   87922 status.go:371] ha-331043-m04 host status = "Stopped" (err=<nil>)
	I1018 11:55:03.930768   87922 status.go:384] host is not running, skipping remaining checks
	I1018 11:55:03.930783   87922 status.go:176] ha-331043-m04 status: &{Name:ha-331043-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.86s)

TestMultiControlPlane/serial/RestartCluster (57.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 11:55:24.483084    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.932197248s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.72s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (43.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 node add --control-plane --alsologtostderr -v 5
E1018 11:56:21.325738    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-331043 node add --control-plane --alsologtostderr -v 5: (42.280040716s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-331043 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.13s)

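After node add --control-plane, the new node should carry the control-plane role. A hedged follow-up check (the label key is the upstream Kubernetes convention; this log only shows the status call):

	# List nodes holding the control-plane role; the added node should appear.
	kubectl --context ha-331043 get nodes -l node-role.kubernetes.io/control-plane -o name
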
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (40.07s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-508594 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-508594 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.071786847s)
--- PASS: TestJSONOutput/start/Command (40.07s)

TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.96s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-508594 --output=json --user=testUser
E1018 11:57:40.615646    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-508594 --output=json --user=testUser: (7.958489094s)
--- PASS: TestJSONOutput/stop/Command (7.96s)

TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-495021 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-495021 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (68.505928ms)

-- stdout --
	{"specversion":"1.0","id":"b535dcaa-7026-4db8-b615-5b5f6dc3c242","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-495021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7dc702b-a17d-4d81-aa83-2c1244f89014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"3dc388b6-1244-44b3-ab94-b47463114337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4fe885f-4404-4ff1-b63c-b3314e03c565","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig"}}
	{"specversion":"1.0","id":"df9eb776-fbbb-444b-bde7-1b075d03383e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube"}}
	{"specversion":"1.0","id":"c417286a-b328-4533-970d-a325fcc7fe40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"57febcdf-b8de-433d-b8bd-6f19b4adb545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cbcab31a-d875-4c79-80bc-d4fb032471c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-495021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-495021
--- PASS: TestErrorJSONOutput (0.21s)

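Each stdout line above is a CloudEvents envelope, and the failure is carried by the io.k8s.sigs.minikube.error event with exitcode, name, and message under .data. A sketch of pulling that out with jq (jq is an assumption here; the test parses the stream in Go):

	# Extract the error event's name, exit code, and message from the JSON stream.
	out/minikube-linux-amd64 start -p json-output-error-495021 --memory=3072 \
	    --output=json --wait=true --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
	           | .data | "\(.name) (exit \(.exitcode)): \(.message)"'
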
TestKicCustomNetwork/create_custom_network (30.39s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-794307 --network=
E1018 11:58:08.324752    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-794307 --network=: (28.215638191s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-794307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-794307
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-794307: (2.158044949s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.39s)

TestKicCustomNetwork/use_default_bridge_network (24.07s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-665029 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-665029 --network=bridge: (22.058243938s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-665029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-665029
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-665029: (1.986718702s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.07s)

                                                
                                    
TestKicExistingNetwork (23.67s)

=== RUN   TestKicExistingNetwork
I1018 11:58:45.727320    9360 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 11:58:45.744196    9360 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 11:58:45.744279    9360 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 11:58:45.744299    9360 cli_runner.go:164] Run: docker network inspect existing-network
W1018 11:58:45.761368    9360 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 11:58:45.761402    9360 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1018 11:58:45.761422    9360 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1018 11:58:45.761589    9360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 11:58:45.779376    9360 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1c78aef7d2ee IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:19:5a:10:36:f4} reservation:<nil>}
I1018 11:58:45.779730    9360 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c4f3a0}
I1018 11:58:45.779779    9360 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 11:58:45.779836    9360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 11:58:45.836328    9360 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-353388 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-353388 --network=existing-network: (21.528727807s)
helpers_test.go:175: Cleaning up "existing-network-353388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-353388
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-353388: (1.995243271s)
I1018 11:59:09.379040    9360 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.67s)
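
The network_create lines above show the subnet scan: 192.168.49.0/24 is already held by an existing bridge (br-1c78aef7d2ee), so minikube settles on 192.168.58.0/24. A simplified Go sketch of that first-free-subnet scan follows, assuming the /24 candidates and the 9-step spacing the log exhibits; the real network.go also inspects host interfaces and holds a reservation, which this sketch omits.

package main

import "fmt"

// firstFreeSubnet walks candidate private /24 ranges and returns the
// first one not present in the taken set. Illustrative only.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	// 192.168.49.0/24 is taken by the default minikube bridge, so the
	// scan lands on 192.168.58.0/24, as in the log above.
	taken := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.58.0/24
}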

                                                
                                    
TestKicCustomSubnet (27.61s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-622258 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-622258 --subnet=192.168.60.0/24: (25.423198571s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-622258 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-622258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-622258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-622258: (2.165484973s)
--- PASS: TestKicCustomSubnet (27.61s)
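
The --format string in the inspect call above is a Go text/template that the docker CLI evaluates against the network's inspect document. A small self-contained sketch of evaluating the same template, with stand-in types for the IPAM section (not docker's own):

package main

import (
	"os"
	"text/template"
)

type config struct{ Subnet string }
type network struct {
	IPAM struct{ Config []config }
}

func main() {
	// The exact template string passed to docker network inspect above.
	tmpl := template.Must(template.New("subnet").
		Parse(`{{(index .IPAM.Config 0).Subnet}}`))

	var n network
	n.IPAM.Config = []config{{Subnet: "192.168.60.0/24"}}

	// Prints 192.168.60.0/24, mirroring what the test asserts for
	// custom-subnet-622258.
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}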

                                                
                                    
TestKicStaticIP (26.55s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-391976 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-391976 --static-ip=192.168.200.200: (24.248955789s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-391976 ip
helpers_test.go:175: Cleaning up "static-ip-391976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-391976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-391976: (2.16581069s)
--- PASS: TestKicStaticIP (26.55s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (48.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-880157 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-880157 --driver=docker  --container-runtime=crio: (20.532920417s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-882497 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-882497 --driver=docker  --container-runtime=crio: (21.594238794s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-880157
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-882497
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-882497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-882497
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-882497: (2.386427477s)
helpers_test.go:175: Cleaning up "first-880157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-880157
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-880157: (2.399707847s)
--- PASS: TestMinikubeProfile (48.10s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.53s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-360096 --memory=3072 --mount-string /tmp/TestMountStartserial4089258771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-360096 --memory=3072 --mount-string /tmp/TestMountStartserial4089258771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.527842271s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.53s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-360096 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.36s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-376063 --memory=3072 --mount-string /tmp/TestMountStartserial4089258771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-376063 --memory=3072 --mount-string /tmp/TestMountStartserial4089258771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.362835345s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.36s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-376063 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-360096 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-360096 --alsologtostderr -v=5: (1.702251042s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-376063 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-376063
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-376063: (1.246317307s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-376063
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-376063: (6.346551495s)
--- PASS: TestMountStart/serial/RestartStopped (7.35s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-376063 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (93.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057399 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 12:01:21.325259    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:02:40.614661    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057399 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.683982315s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.16s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-057399 -- rollout status deployment/busybox: (2.189513627s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-2rspl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-fqbcc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-2rspl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-fqbcc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-2rspl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-fqbcc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.45s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.65s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-2rspl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-2rspl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-fqbcc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-057399 -- exec busybox-7b57f96db7-fqbcc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.65s)
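
The exec lines above recover the host gateway address by slicing busybox nslookup output with awk 'NR==5' and cut -d' ' -f3. The same extraction in Go, against a stand-in busybox-style transcript (the transcript text is illustrative; only the line and field positions come from the command above):

package main

import (
	"fmt"
	"strings"
)

// hostIP takes field 3 of line 5, mirroring awk 'NR==5' | cut -d' ' -f3.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5; awk lines are 1-indexed
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -f3
}

func main() {
	// Stand-in for busybox nslookup output inside the pod.
	out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1\n"
	fmt.Println(hostIP(out)) // 192.168.67.1
}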

                                                
                                    
TestMultiNode/serial/AddNode (24.02s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-057399 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-057399 -v=5 --alsologtostderr: (23.400857573s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.02s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-057399 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp testdata/cp-test.txt multinode-057399:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3300834567/001/cp-test_multinode-057399.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399:/home/docker/cp-test.txt multinode-057399-m02:/home/docker/cp-test_multinode-057399_multinode-057399-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m02 "sudo cat /home/docker/cp-test_multinode-057399_multinode-057399-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399:/home/docker/cp-test.txt multinode-057399-m03:/home/docker/cp-test_multinode-057399_multinode-057399-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m03 "sudo cat /home/docker/cp-test_multinode-057399_multinode-057399-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp testdata/cp-test.txt multinode-057399-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3300834567/001/cp-test_multinode-057399-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399-m02:/home/docker/cp-test.txt multinode-057399:/home/docker/cp-test_multinode-057399-m02_multinode-057399.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399 "sudo cat /home/docker/cp-test_multinode-057399-m02_multinode-057399.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399-m02:/home/docker/cp-test.txt multinode-057399-m03:/home/docker/cp-test_multinode-057399-m02_multinode-057399-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m03 "sudo cat /home/docker/cp-test_multinode-057399-m02_multinode-057399-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp testdata/cp-test.txt multinode-057399-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3300834567/001/cp-test_multinode-057399-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399-m03:/home/docker/cp-test.txt multinode-057399:/home/docker/cp-test_multinode-057399-m03_multinode-057399.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399 "sudo cat /home/docker/cp-test_multinode-057399-m03_multinode-057399.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 cp multinode-057399-m03:/home/docker/cp-test.txt multinode-057399-m02:/home/docker/cp-test_multinode-057399-m03_multinode-057399-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 ssh -n multinode-057399-m02 "sudo cat /home/docker/cp-test_multinode-057399-m03_multinode-057399-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.44s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-057399 node stop m03: (1.252559645s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057399 status: exit status 7 (495.412092ms)

-- stdout --
	multinode-057399
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-057399-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-057399-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr: exit status 7 (484.344075ms)

-- stdout --
	multinode-057399
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-057399-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-057399-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 12:03:28.985418  147664 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:03:28.985540  147664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:03:28.985547  147664 out.go:374] Setting ErrFile to fd 2...
	I1018 12:03:28.985552  147664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:03:28.985810  147664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:03:28.986011  147664 out.go:368] Setting JSON to false
	I1018 12:03:28.986046  147664 mustload.go:65] Loading cluster: multinode-057399
	I1018 12:03:28.986190  147664 notify.go:220] Checking for updates...
	I1018 12:03:28.986456  147664 config.go:182] Loaded profile config "multinode-057399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:03:28.986472  147664 status.go:174] checking status of multinode-057399 ...
	I1018 12:03:28.986948  147664 cli_runner.go:164] Run: docker container inspect multinode-057399 --format={{.State.Status}}
	I1018 12:03:29.005879  147664 status.go:371] multinode-057399 host status = "Running" (err=<nil>)
	I1018 12:03:29.005905  147664 host.go:66] Checking if "multinode-057399" exists ...
	I1018 12:03:29.006156  147664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-057399
	I1018 12:03:29.023489  147664 host.go:66] Checking if "multinode-057399" exists ...
	I1018 12:03:29.023798  147664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:03:29.023841  147664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-057399
	I1018 12:03:29.040906  147664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/multinode-057399/id_rsa Username:docker}
	I1018 12:03:29.135070  147664 ssh_runner.go:195] Run: systemctl --version
	I1018 12:03:29.141442  147664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:03:29.153880  147664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:03:29.211963  147664 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 12:03:29.20036951 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:03:29.212542  147664 kubeconfig.go:125] found "multinode-057399" server: "https://192.168.67.2:8443"
	I1018 12:03:29.212573  147664 api_server.go:166] Checking apiserver status ...
	I1018 12:03:29.212615  147664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:03:29.224035  147664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	W1018 12:03:29.232943  147664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:03:29.233002  147664 ssh_runner.go:195] Run: ls
	I1018 12:03:29.237046  147664 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 12:03:29.241228  147664 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 12:03:29.241252  147664 status.go:463] multinode-057399 apiserver status = Running (err=<nil>)
	I1018 12:03:29.241268  147664 status.go:176] multinode-057399 status: &{Name:multinode-057399 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:03:29.241301  147664 status.go:174] checking status of multinode-057399-m02 ...
	I1018 12:03:29.241547  147664 cli_runner.go:164] Run: docker container inspect multinode-057399-m02 --format={{.State.Status}}
	I1018 12:03:29.259548  147664 status.go:371] multinode-057399-m02 host status = "Running" (err=<nil>)
	I1018 12:03:29.259574  147664 host.go:66] Checking if "multinode-057399-m02" exists ...
	I1018 12:03:29.259870  147664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-057399-m02
	I1018 12:03:29.278586  147664 host.go:66] Checking if "multinode-057399-m02" exists ...
	I1018 12:03:29.278925  147664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:03:29.278962  147664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-057399-m02
	I1018 12:03:29.297661  147664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21647-5865/.minikube/machines/multinode-057399-m02/id_rsa Username:docker}
	I1018 12:03:29.392259  147664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:03:29.405225  147664 status.go:176] multinode-057399-m02 status: &{Name:multinode-057399-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:03:29.405271  147664 status.go:174] checking status of multinode-057399-m03 ...
	I1018 12:03:29.405560  147664 cli_runner.go:164] Run: docker container inspect multinode-057399-m03 --format={{.State.Status}}
	I1018 12:03:29.423781  147664 status.go:371] multinode-057399-m03 host status = "Stopped" (err=<nil>)
	I1018 12:03:29.423806  147664 status.go:384] host is not running, skipping remaining checks
	I1018 12:03:29.423812  147664 status.go:176] multinode-057399-m03 status: &{Name:multinode-057399-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
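
Worth noting from the status calls above: minikube status exits non-zero whenever any node is down, and this suite treats exit status 7 as a possibly-expected "stopped" state rather than a hard failure (see the "status error: exit status 7 (may be ok)" note in TestScheduledStopUnix below). A hedged Go sketch of that interpretation; the binary name and profile are placeholders, and the exit-code meaning is taken from this run's behavior:

package main

import (
	"fmt"
	"os/exec"
)

// clusterStopped reports whether minikube status signalled a stopped
// host via exit status 7, as observed in the log above.
func clusterStopped(profile string) (bool, error) {
	cmd := exec.Command("minikube", "status", "-p", profile)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return false, nil // exit 0: everything running
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		return true, nil // exit 7: some host is stopped; expected here
	}
	return false, fmt.Errorf("status failed: %v\n%s", err, out)
}

func main() {
	stopped, err := clusterStopped("multinode-057399")
	fmt.Println(stopped, err)
}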

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-057399 node start m03 -v=5 --alsologtostderr: (6.54784761s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.24s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.55s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-057399
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-057399
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-057399: (29.522610373s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057399 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057399 --wait=true -v=5 --alsologtostderr: (51.931510062s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-057399
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.55s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-057399 node delete m03: (4.631390112s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-057399 stop: (30.118956245s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057399 status: exit status 7 (87.049634ms)

-- stdout --
	multinode-057399
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-057399-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr: exit status 7 (86.419952ms)

-- stdout --
	multinode-057399
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-057399-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 12:05:33.682326  157361 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:05:33.682615  157361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:05:33.682626  157361 out.go:374] Setting ErrFile to fd 2...
	I1018 12:05:33.682633  157361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:05:33.682834  157361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:05:33.683058  157361 out.go:368] Setting JSON to false
	I1018 12:05:33.683091  157361 mustload.go:65] Loading cluster: multinode-057399
	I1018 12:05:33.683132  157361 notify.go:220] Checking for updates...
	I1018 12:05:33.683496  157361 config.go:182] Loaded profile config "multinode-057399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:05:33.683514  157361 status.go:174] checking status of multinode-057399 ...
	I1018 12:05:33.683967  157361 cli_runner.go:164] Run: docker container inspect multinode-057399 --format={{.State.Status}}
	I1018 12:05:33.702548  157361 status.go:371] multinode-057399 host status = "Stopped" (err=<nil>)
	I1018 12:05:33.702573  157361 status.go:384] host is not running, skipping remaining checks
	I1018 12:05:33.702580  157361 status.go:176] multinode-057399 status: &{Name:multinode-057399 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:05:33.702624  157361 status.go:174] checking status of multinode-057399-m02 ...
	I1018 12:05:33.702898  157361 cli_runner.go:164] Run: docker container inspect multinode-057399-m02 --format={{.State.Status}}
	I1018 12:05:33.722094  157361 status.go:371] multinode-057399-m02 host status = "Stopped" (err=<nil>)
	I1018 12:05:33.722120  157361 status.go:384] host is not running, skipping remaining checks
	I1018 12:05:33.722127  157361 status.go:176] multinode-057399-m02 status: &{Name:multinode-057399-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.29s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.46s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057399 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 12:06:21.325885    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057399 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.879083564s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-057399 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.46s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.96s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-057399
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057399-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-057399-m02 --driver=docker  --container-runtime=crio: exit status 14 (63.447668ms)

-- stdout --
	* [multinode-057399-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-057399-m02' is duplicated with machine name 'multinode-057399-m02' in profile 'multinode-057399'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-057399-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-057399-m03 --driver=docker  --container-runtime=crio: (21.221513079s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-057399
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-057399: exit status 80 (269.995333ms)

-- stdout --
	* Adding node m03 to cluster multinode-057399 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-057399-m03 already exists in multinode-057399-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-057399-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-057399-m03: (2.362633444s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.96s)

                                                
                                    
TestPreload (83.93s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-540321 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-540321 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (45.895259037s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-540321 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-540321 image pull gcr.io/k8s-minikube/busybox: (1.516254404s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-540321
E1018 12:07:40.614881    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-540321: (5.822771464s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-540321 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-540321 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (28.059618556s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-540321 image list
helpers_test.go:175: Cleaning up "test-preload-540321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-540321
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-540321: (2.426177448s)
--- PASS: TestPreload (83.93s)

                                                
                                    
TestScheduledStopUnix (96.3s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-844733 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-844733 --memory=3072 --driver=docker  --container-runtime=crio: (20.539896263s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-844733 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-844733 -n scheduled-stop-844733
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-844733 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 12:08:37.214268    9360 retry.go:31] will retry after 52.749µs: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.215433    9360 retry.go:31] will retry after 148.106µs: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.216602    9360 retry.go:31] will retry after 333.255µs: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.217718    9360 retry.go:31] will retry after 298.655µs: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.218818    9360 retry.go:31] will retry after 468.027µs: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.219956    9360 retry.go:31] will retry after 742.341µs: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.221088    9360 retry.go:31] will retry after 1.672747ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.223319    9360 retry.go:31] will retry after 1.100372ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.225505    9360 retry.go:31] will retry after 1.868859ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.227718    9360 retry.go:31] will retry after 2.158342ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.230923    9360 retry.go:31] will retry after 2.968162ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.234121    9360 retry.go:31] will retry after 7.027462ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.241272    9360 retry.go:31] will retry after 10.778344ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.252560    9360 retry.go:31] will retry after 26.974344ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
I1018 12:08:37.279863    9360 retry.go:31] will retry after 42.87941ms: open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/scheduled-stop-844733/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-844733 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-844733 -n scheduled-stop-844733
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-844733
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-844733 --schedule 15s
E1018 12:09:03.690782    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1018 12:09:24.393569    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-844733
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-844733: exit status 7 (73.397891ms)

-- stdout --
	scheduled-stop-844733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-844733 -n scheduled-stop-844733
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-844733 -n scheduled-stop-844733: exit status 7 (72.689796ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-844733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-844733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-844733: (4.36854384s)
--- PASS: TestScheduledStopUnix (96.30s)
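
The retry.go lines above poll for the scheduled-stop pid file with short, roughly doubling, jittered delays (52µs up to about 43ms). Below is a minimal Go sketch of that retry-with-backoff pattern, not minikube's retry package itself; the pid-file path is a placeholder:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping a jittered, doubling delay between tries.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the delay, then double it, matching the spacing the
		// log lines above suggest.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	pidFile := "/tmp/scheduled-stop-844733/pid" // placeholder path
	err := retryWithBackoff(15, 50*time.Microsecond, func() error {
		_, statErr := os.Stat(pidFile)
		return statErr
	})
	if err != nil && errors.Is(err, os.ErrNotExist) {
		fmt.Println("pid file never appeared:", err)
	}
}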

                                                
                                    
TestInsufficientStorage (12.73s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-729429 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-729429 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.259987825s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1b12ae48-9d37-42e8-bd0b-45b55af136d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-729429] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"675ee952-f6d1-4e1f-8b8b-b4890a1776c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"d47879c1-dfd4-424a-949b-545ef0f09b64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"819dab7c-8579-4518-9e6d-5fcd248e2038","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig"}}
	{"specversion":"1.0","id":"784b1ae9-d2ef-4fe2-8b4b-8a84f8e6561b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube"}}
	{"specversion":"1.0","id":"4b8b1760-9f77-43b1-9eac-021000b864f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b97a8eb6-7519-42c4-a224-06b8940fae6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d05ef5a4-567b-4f88-8898-765834c17dce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f30f5146-56ba-4292-9e53-37644a14c7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e21fd625-b689-4444-b4f8-4551c0056599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cf96804-be77-4008-9a70-321d5f5a4462","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ed583e5f-62f9-45a1-a6b7-bd30ed5ceb55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-729429\" primary control-plane node in \"insufficient-storage-729429\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"352c7acc-94fb-4951-8a21-90f3868c1cdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"99d742c2-a866-443c-9e29-e9da05a8689d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1674ebe-d629-4306-891c-8dfda9e38371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-729429 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-729429 --output=json --layout=cluster: exit status 7 (280.186333ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-729429","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-729429","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 12:10:03.079495  177592 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-729429" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-729429 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-729429 --output=json --layout=cluster: exit status 7 (286.762852ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-729429","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-729429","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 12:10:03.366621  177705 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-729429" does not appear in /home/jenkins/minikube-integration/21647-5865/kubeconfig
	E1018 12:10:03.377420  177705 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/insufficient-storage-729429/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-729429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-729429
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-729429: (1.904521303s)
--- PASS: TestInsufficientStorage (12.73s)
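
A minimal sketch of how this run simulates a full disk, using the two test-only overrides visible in the log; the profile name is a placeholder and the comments describe the behavior observed above:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100   # test knob seen in this run's env
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19   # test knob seen in this run's env
    minikube start -p storage-demo --memory=3072 --output=json --wait=true \
      --driver=docker --container-runtime=crio  # fails with exit code 26 (RSRC_DOCKER_STORAGE)
    minikube status -p storage-demo --output=json --layout=cluster  # StatusCode 507, exit 7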

                                                
                                    
TestRunningBinaryUpgrade (46.72s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1409296649 start -p running-upgrade-054724 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1018 12:12:40.615974    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1409296649 start -p running-upgrade-054724 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.277996106s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-054724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-054724 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.390582407s)
helpers_test.go:175: Cleaning up "running-upgrade-054724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-054724
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-054724: (2.567235544s)
--- PASS: TestRunningBinaryUpgrade (46.72s)
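
A minimal sketch of the running-binary upgrade above: a cluster created by an older minikube release is taken over in place by re-running start on the same profile with the newer binary (paths and the profile name are placeholders):

    /tmp/minikube-v1.32.0 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
    ./out/minikube-linux-amd64 start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=crio
    ./out/minikube-linux-amd64 delete -p upgrade-demo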

                                                
                                    
TestKubernetesUpgrade (318.56s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.231938764s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-291565
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-291565: (2.397531961s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-291565 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-291565 status --format={{.Host}}: exit status 7 (109.262496ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.728941524s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-291565 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (81.990344ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-291565] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-291565
	    minikube start -p kubernetes-upgrade-291565 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2915652 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-291565 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-291565 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.911932299s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-291565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-291565
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-291565: (3.029461305s)
--- PASS: TestKubernetesUpgrade (318.56s)
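
A minimal sketch of the upgrade/downgrade sequence above (profile name is a placeholder): upgrading across a stop succeeds, while an in-place downgrade is refused before touching the cluster:

    minikube start -p k8s-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p k8s-demo
    minikube start -p k8s-demo --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    minikube start -p k8s-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio \
      || echo "refused: K8S_DOWNGRADE_UNSUPPORTED (exit status 106)"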

                                                
                                    
TestMissingContainerUpgrade (98.09s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4074162023 start -p missing-upgrade-306315 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4074162023 start -p missing-upgrade-306315 --memory=3072 --driver=docker  --container-runtime=crio: (46.273852129s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-306315
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-306315: (3.878328572s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-306315
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-306315 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-306315 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.832567591s)
helpers_test.go:175: Cleaning up "missing-upgrade-306315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-306315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-306315: (2.45868584s)
--- PASS: TestMissingContainerUpgrade (98.09s)
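
A minimal sketch of the missing-container recovery above: the node container is stopped and removed out from under minikube, and the newer binary recreates it on the next start (paths and the profile name are placeholders):

    /tmp/minikube-v1.32.0 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio
    docker stop missing-demo && docker rm missing-demo   # simulate the lost node container
    ./out/minikube-linux-amd64 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio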

                                                
                                    
TestNetworkPlugins/group/false (10.88s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-376567 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-376567 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (224.951222ms)

                                                
                                                
-- stdout --
	* [false-376567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:10:09.420106  179713 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:10:09.420389  179713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:10:09.420397  179713 out.go:374] Setting ErrFile to fd 2...
	I1018 12:10:09.420402  179713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:10:09.420740  179713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-5865/.minikube/bin
	I1018 12:10:09.421526  179713 out.go:368] Setting JSON to false
	I1018 12:10:09.422942  179713 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3157,"bootTime":1760786252,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 12:10:09.423142  179713 start.go:141] virtualization: kvm guest
	I1018 12:10:09.425646  179713 out.go:179] * [false-376567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 12:10:09.428065  179713 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:10:09.428345  179713 notify.go:220] Checking for updates...
	I1018 12:10:09.431867  179713 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:10:09.437597  179713 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	I1018 12:10:09.439399  179713 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	I1018 12:10:09.440932  179713 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 12:10:09.442298  179713 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:10:09.445870  179713 config.go:182] Loaded profile config "kubernetes-upgrade-291565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 12:10:09.446029  179713 config.go:182] Loaded profile config "offline-crio-285533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:10:09.446139  179713 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:10:09.479575  179713 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 12:10:09.479837  179713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:10:09.565961  179713 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 12:10:09.551651082 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 12:10:09.566107  179713 docker.go:318] overlay module found
	I1018 12:10:09.570636  179713 out.go:179] * Using the docker driver based on user configuration
	I1018 12:10:09.572005  179713 start.go:305] selected driver: docker
	I1018 12:10:09.572031  179713 start.go:925] validating driver "docker" against <nil>
	I1018 12:10:09.572048  179713 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:10:09.574207  179713 out.go:203] 
	W1018 12:10:09.575968  179713 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 12:10:09.577368  179713 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-376567 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-376567" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-376567

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376567"

                                                
                                                
----------------------- debugLogs end: false-376567 [took: 10.420867641s] --------------------------------
helpers_test.go:175: Cleaning up "false-376567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-376567
--- PASS: TestNetworkPlugins/group/false (10.88s)
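
A minimal sketch of the validation exercised above: crio requires a CNI, so --cni=false is rejected before any cluster is created (profile name is a placeholder):

    minikube start -p false-demo --memory=3072 --cni=false --driver=docker --container-runtime=crio \
      || echo "rejected with exit status 14: the \"crio\" container runtime requires CNI"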

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (48.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.497209820 start -p stopped-upgrade-881970 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.497209820 start -p stopped-upgrade-881970 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.661924302s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.497209820 -p stopped-upgrade-881970 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.497209820 -p stopped-upgrade-881970 stop: (11.884764097s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-881970 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-881970 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.905784481s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (48.45s)
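
A minimal sketch of the stopped-binary upgrade above: unlike the running-binary case, the cluster is stopped with the old release before the new binary starts it (paths and the profile name are placeholders):

    /tmp/minikube-v1.32.0 start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0 -p stopped-demo stop
    ./out/minikube-linux-amd64 start -p stopped-demo --memory=3072 --driver=docker --container-runtime=crio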

                                                
                                    
TestPause/serial/Start (41.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-647824 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-647824 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.87592055s)
--- PASS: TestPause/serial/Start (41.88s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-881970
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.61s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-647824 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-647824 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.595714515s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.61s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492996 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-492996 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (71.9532ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-492996] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-5865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-5865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
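
A minimal sketch of the flag conflict above: --no-kubernetes cannot be combined with --kubernetes-version, and the error message itself suggests clearing any global default (profile name is a placeholder):

    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio || echo "rejected: MK_USAGE (exit status 14)"
    minikube config unset kubernetes-version   # the fix the error message suggests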

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (22.55s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492996 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492996 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.200119802s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-492996 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.55s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.16s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (44.164808093s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.16s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.61s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492996 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492996 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.287040208s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-492996 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-492996 status -o json: exit status 2 (291.554505ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-492996","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-492996
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-492996: (2.03204102s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.61s)

                                                
                                    
TestNoKubernetes/serial/Start (4.81s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492996 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492996 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.810917157s)
--- PASS: TestNoKubernetes/serial/Start (4.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-492996 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-492996 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.293633ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
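
A minimal sketch of the verification above: in a --no-kubernetes profile the kubelet unit should not be active, so is-active exits non-zero (profile name is a placeholder):

    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet running (unexpected)" || echo "kubelet not running (expected here)"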

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.77s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.77s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-492996
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-492996: (1.289241306s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.56s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-492996 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-492996 --driver=docker  --container-runtime=crio: (6.564833319s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.56s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-492996 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-492996 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.242038ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (40.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.277862831s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.28s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-376567 "pgrep -a kubelet"
I1018 12:14:06.974002    9360 config.go:182] Loaded profile config "auto-376567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

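The KubeletFlags checks in this group work by dumping the kubelet command line over SSH; a sketch follows, with the grep pattern purely illustrative (the real assertions live in net_test.go and may look at different flags):

	# Show the kubelet process and its full flag set inside the auto-376567 node
	out/minikube-linux-amd64 ssh -p auto-376567 "pgrep -a kubelet"
	# e.g. pick out a single flag of interest (illustrative, not the test's own check)
	out/minikube-linux-amd64 ssh -p auto-376567 "pgrep -a kubelet" | grep -o -- '--container-runtime-endpoint=[^ ]*'
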
TestNetworkPlugins/group/auto/NetCatPod (8.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-376567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t7dng" [a5a6068c-f592-4120-bf9f-453fef6ea3a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t7dng" [a5a6068c-f592-4120-bf9f-453fef6ea3a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004608992s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.32s)

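The NetCatPod step can be reproduced by hand; a sketch, assuming the repo's testdata/netcat-deployment.yaml and the context name from the log (the test polls pod state through helpers_test.go; kubectl wait is an equivalent shortcut):

	# Re-create the netcat deployment, replacing any leftover copy
	kubectl --context auto-376567 replace --force -f testdata/netcat-deployment.yaml
	# Block until the app=netcat pod is Ready; the test allows up to 15m
	kubectl --context auto-376567 wait --for=condition=Ready pod -l app=netcat --timeout=15m
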
TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-376567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

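What the DNS probe exercises: in-cluster name resolution from inside the netcat pod, so a pass implies the CNI plus CoreDNS path works end to end. The ClusterIP returned for kubernetes.default is commonly 10.96.0.1 but depends on the cluster's service CIDR.

	# Resolve the API server's cluster-internal name from inside the pod
	kubectl --context auto-376567 exec deployment/netcat -- nslookup kubernetes.default
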
TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

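The Localhost probe is a zero-I/O connect test: nc's -z flag only verifies that the port accepts a connection, with -w 5 as the timeout, so this passes whenever the pod can reach its own listener over loopback.

	# Connect-only test against the pod's own loopback listener on 8080
	kubectl --context auto-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
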
TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

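HairPin differs from Localhost only in the target: it dials the pod's own Service name ("netcat"), so the connection leaves via the service VIP and must be hairpinned back to the same pod, which the CNI/kube-proxy setup has to support.

	# Connect to our own Service name; requires working hairpin NAT (or equivalent)
	kubectl --context auto-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
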
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4zqdk" [0fcb3725-3e83-480e-989b-10ef30a3ec0e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003946639s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

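The ControllerPod wait above can be approximated with a one-liner; a sketch using the label selector from the log (the test itself polls via helpers_test.go rather than kubectl wait):

	# Wait for the kindnet DaemonSet pod to become Ready in kube-system
	kubectl --context kindnet-376567 wait --for=condition=Ready pod -l app=kindnet -n kube-system --timeout=10m
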
TestNetworkPlugins/group/calico/Start (50.55s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.54503442s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.55s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-376567 "pgrep -a kubelet"
I1018 12:14:37.366046    9360 config.go:182] Loaded profile config "kindnet-376567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-376567 replace --force -f testdata/netcat-deployment.yaml
I1018 12:14:37.759187    9360 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1018 12:14:37.810561    9360 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9s957" [f7df9a97-1b9c-434a-90cc-eb98a9237c85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9s957" [f7df9a97-1b9c-434a-90cc-eb98a9237c85] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004070091s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.46s)

TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-376567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/Start (54.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.356654632s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.36s)

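As the Start invocations in this group show, --cni accepts either a built-in plugin name or a path to an arbitrary CNI manifest; a sketch of both forms, with hypothetical profile names:

	# Built-in plugin selected by name
	out/minikube-linux-amd64 start -p demo-flannel --cni=flannel --driver=docker --container-runtime=crio
	# Custom manifest applied at start
	out/minikube-linux-amd64 start -p demo-custom --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
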
TestNetworkPlugins/group/enable-default-cni/Start (40.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (40.673309845s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.67s)

TestNetworkPlugins/group/flannel/Start (47.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.817817937s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.82s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-b6d92" [5aa3c9ec-334b-4d8f-a419-ca054fe37d59] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-b6d92" [5aa3c9ec-334b-4d8f-a419-ca054fe37d59] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005626219s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-376567 "pgrep -a kubelet"
I1018 12:15:31.199500    9360 config.go:182] Loaded profile config "calico-376567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-376567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d5b9m" [3c9f749f-59e8-4d1a-b9de-5aeb788aef87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d5b9m" [3c9f749f-59e8-4d1a-b9de-5aeb788aef87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.0037946s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.26s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-376567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-376567 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-376567 "pgrep -a kubelet"
I1018 12:15:50.640668    9360 config.go:182] Loaded profile config "enable-default-cni-376567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-376567 replace --force -f testdata/netcat-deployment.yaml
I1018 12:15:50.869437    9360 config.go:182] Loaded profile config "custom-flannel-376567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zf6wd" [61d96632-b69e-41a4-8ab0-b61218dead71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zf6wd" [61d96632-b69e-41a4-8ab0-b61218dead71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.076237285s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-376567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4qzzh" [093e334d-b3c5-4cb8-b4ba-ca2330c6fa59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4qzzh" [093e334d-b3c5-4cb8-b4ba-ca2330c6fa59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004735735s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-376567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (37.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-376567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.245278451s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-376567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-plcww" [2452681d-f85a-4b0c-8110-4d90c06201ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004573962s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-376567 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-376567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lgwp8" [1e49a83e-1a0e-4a11-9817-f62b86da9326] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lgwp8" [1e49a83e-1a0e-4a11-9817-f62b86da9326] Running
E1018 12:16:21.325907    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/addons-162665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003724037s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.497468592s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.50s)

TestStartStop/group/no-preload/serial/FirstStart (55.64s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.642457148s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.64s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-376567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-376567 "pgrep -a kubelet"
I1018 12:16:38.655142    9360 config.go:182] Loaded profile config "bridge-376567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-376567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w9kdm" [f70538ed-5832-4dc9-ae5d-f98ad29cf473] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w9kdm" [f70538ed-5832-4dc9-ae5d-f98ad29cf473] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003987059s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-376567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-376567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)
E1018 12:19:07.275562    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.281947    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.293354    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.314755    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.356165    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.438076    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.600004    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.922169    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:08.563585    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/embed-certs/serial/FirstStart (70.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.644681897s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.64s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.456248885s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.46s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-024443 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [864f752a-d618-4c5e-8c15-67818c8295e2] Pending
helpers_test.go:352: "busybox" [864f752a-d618-4c5e-8c15-67818c8295e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [864f752a-d618-4c5e-8c15-67818c8295e2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003434268s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-024443 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

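DeployApp finishes with an ulimit probe inside the busybox pod; a sketch of that last step, assuming the pod from testdata/busybox.yaml is Running:

	# Print the open-file-descriptor soft limit seen by the container
	kubectl --context old-k8s-version-024443 exec busybox -- /bin/sh -c "ulimit -n"
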
TestStartStop/group/no-preload/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-406541 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f4ad8cbc-03d3-4f16-ab03-49d332b6fff3] Pending
helpers_test.go:352: "busybox" [f4ad8cbc-03d3-4f16-ab03-49d332b6fff3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f4ad8cbc-03d3-4f16-ab03-49d332b6fff3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004045424s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-406541 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.26s)

TestStartStop/group/old-k8s-version/serial/Stop (16.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-024443 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-024443 --alsologtostderr -v=3: (16.350821743s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.35s)

TestStartStop/group/no-preload/serial/Stop (16.31s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-406541 --alsologtostderr -v=3
E1018 12:17:40.614676    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/functional-874021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-406541 --alsologtostderr -v=3: (16.305950035s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443: exit status 7 (71.997553ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

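The exit-code handling is the interesting part of EnableAddonAfterStop: minikube status exits 7 when the host is stopped, which the test tolerates ("may be ok") before enabling the addon against the stopped profile. A sketch:

	# status exits 7 for a stopped host; accept 0 or 7, fail on anything else
	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-024443 -n old-k8s-version-024443
	rc=$?
	[ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || { echo "unexpected status exit: $rc" >&2; exit 1; }
	# Addon settings persist in the profile even while the cluster is down
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-024443 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
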
TestStartStop/group/old-k8s-version/serial/SecondStart (49.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-024443 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.603616456s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-024443 -n old-k8s-version-024443
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541: exit status 7 (67.893813ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-406541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (46.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-406541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.105821906s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-406541 -n no-preload-406541
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.44s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-028309 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cefc36cd-351a-479e-b06d-eca09ed979eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cefc36cd-351a-479e-b06d-eca09ed979eb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003767829s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-028309 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-175371 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d7e2785e-4860-4f2d-af78-a6a7770e8f29] Pending
helpers_test.go:352: "busybox" [d7e2785e-4860-4f2d-af78-a6a7770e8f29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d7e2785e-4860-4f2d-af78-a6a7770e8f29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003357721s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-175371 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-028309 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-028309 --alsologtostderr -v=3: (16.658356667s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.66s)

TestStartStop/group/embed-certs/serial/Stop (18.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-175371 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-175371 --alsologtostderr -v=3: (18.154783245s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309: exit status 7 (69.832799ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-028309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-028309 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.692031912s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028309 -n default-k8s-diff-port-028309
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371: exit status 7 (68.920564ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-175371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (45.65s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-175371 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.281549063s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-175371 -n embed-certs-175371
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v6qwc" [9141356b-0963-420d-826b-5d8a8760e89c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004031206s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7gk7m" [daca9387-7b3a-4193-b10d-25e2c8a391dd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003612981s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v6qwc" [9141356b-0963-420d-826b-5d8a8760e89c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003463779s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-406541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7gk7m" [daca9387-7b3a-4193-b10d-25e2c8a391dd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003686329s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-024443 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-406541 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

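The image audit lists the profile's cached images and flags anything that is not a stock Kubernetes image; a sketch of an equivalent manual check, where the jq filter and the repoTags field name are assumptions about the JSON shape, not the test's own logic:

	# List images in the profile and surface non-registry.k8s.io entries
	out/minikube-linux-amd64 -p no-preload-406541 image list --format=json \
		| jq -r '.[].repoTags[]?' | grep -v '^registry.k8s.io/'
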
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-024443 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/FirstStart (26.45s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (26.454376979s)
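For reproduction outside CI, the same FirstStart flags work against a stock minikube binary; a sketch, assuming the newest-cni-579606 profile name is free locally (flags copied from the invocation above):

	minikube start -p newest-cni-579606 --memory=3072 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.34.1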
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.45s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lmkc8" [0cbea8ce-2682-493f-a179-dc61658f2ed9] Running
E1018 12:19:09.845196    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:12.406697    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003246408s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lmkc8" [0cbea8ce-2682-493f-a179-dc61658f2ed9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004020719s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-028309 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z4wqj" [9162a212-7249-4ae3-a9ee-877a66ae4adf] Running
E1018 12:19:17.528210    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/auto-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004036937s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-028309 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z4wqj" [9162a212-7249-4ae3-a9ee-877a66ae4adf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00381381s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-175371 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/Stop (12.54s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-579606 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-579606 --alsologtostderr -v=3: (12.542711108s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.54s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-175371 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606: exit status 7 (69.230006ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-579606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
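Both steps above are plain CLI features rather than test-only hooks; a sketch of the same sequence by hand (the flags are taken from the log, the shell framing is an assumption):

	# {{.Host}} prints only the host state; a stopped profile exits 7, which the test treats as ok
	minikube status --format='{{.Host}}' -p newest-cni-579606
	# enable the dashboard addon on the stopped profile, overriding the MetricsScraper image
	minikube addons enable dashboard -p newest-cni-579606 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4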
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (10.67s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 12:19:36.182445    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:41.304752    9360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-5865/.minikube/profiles/kindnet-376567/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-579606 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.336040652s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-579606 -n newest-cni-579606
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.67s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-579606 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
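The image audit simply lists what the runtime holds and flags anything outside minikube's expected set; a hedged sketch for inspecting the same data, where the jq filter and the repoTags field name are assumptions about the JSON shape rather than something this log confirms:

	minikube -p newest-cni-579606 image list --format=json
	# assuming each entry exposes a repoTags array:
	minikube -p newest-cni-579606 image list --format=json | jq -r '.[].repoTags[]?'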
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
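For contrast with the crio skip above, the Docker-runtime flow these env tests exercise is the standard docker-env round trip; a sketch with a placeholder profile name (not runnable on this crio job):

	# point the local docker CLI at the cluster's Docker daemon
	eval "$(minikube -p <profile> docker-env)"
	# and undo it afterwards
	eval "$(minikube -p <profile> docker-env --unset)"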
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.07s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-376567 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-376567

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-376567

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /etc/hosts:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /etc/resolv.conf:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-376567

>>> host: crictl pods:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: crictl containers:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> k8s: describe netcat deployment:
error: context "kubenet-376567" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-376567" does not exist

>>> k8s: netcat logs:
error: context "kubenet-376567" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-376567" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-376567" does not exist

>>> k8s: coredns logs:
error: context "kubenet-376567" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-376567" does not exist

>>> k8s: api server logs:
error: context "kubenet-376567" does not exist

>>> host: /etc/cni:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: ip a s:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: ip r s:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: iptables-save:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: iptables table nat:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-376567" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-376567" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-376567" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: kubelet daemon config:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> k8s: kubelet logs:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-376567

>>> host: docker daemon status:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: docker daemon config:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: docker system info:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: cri-docker daemon status:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: cri-docker daemon config:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: cri-dockerd version:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: containerd daemon status:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: containerd daemon config:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: containerd config dump:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: crio daemon status:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: crio daemon config:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: /etc/crio:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

>>> host: crio config:
* Profile "kubenet-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376567"

----------------------- debugLogs end: kubenet-376567 [took: 3.887602292s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-376567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-376567
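The cleanup step is the same command one would run by hand after an aborted network-plugins run; a sketch, assuming the stale profile is the only leftover:

	minikube profile list              # confirm what is left behind
	minikube delete -p kubenet-376567  # remove the single stale profile
	minikube delete --all              # or wipe every profile on the host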
--- SKIP: TestNetworkPlugins/group/kubenet (4.07s)

TestNetworkPlugins/group/cilium (4.06s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-376567 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-376567

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-376567" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: kubelet daemon config:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> k8s: kubelet logs:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-376567

>>> host: docker daemon status:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: docker daemon config:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: docker system info:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: cri-docker daemon status:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: cri-docker daemon config:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: cri-dockerd version:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: containerd daemon status:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: containerd daemon config:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: containerd config dump:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: crio daemon status:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: crio daemon config:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: /etc/crio:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

>>> host: crio config:
* Profile "cilium-376567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376567"

----------------------- debugLogs end: cilium-376567 [took: 3.860484462s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-376567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-376567
--- SKIP: TestNetworkPlugins/group/cilium (4.06s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-200198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-200198
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
