Test Report: Docker_Linux_crio 21800

bb40a8e434b348a4cf46a27f5566e4aff121b396:2025-10-29:42116

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 13.13
36 TestAddons/parallel/RegistryCreds 0.44
37 TestAddons/parallel/Ingress 146.4
38 TestAddons/parallel/InspektorGadget 5.3
39 TestAddons/parallel/MetricsServer 5.33
41 TestAddons/parallel/CSI 49.46
42 TestAddons/parallel/Headlamp 2.62
43 TestAddons/parallel/CloudSpanner 5.27
44 TestAddons/parallel/LocalPath 10.15
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 5.26
47 TestAddons/parallel/AmdGpuDevicePlugin 5.27
97 TestFunctional/parallel/ServiceCmdConnect 603.12
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.75
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
144 TestFunctional/parallel/ServiceCmd/DeployApp 600.65
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.55
154 TestFunctional/parallel/ServiceCmd/URL 0.55
191 TestJSONOutput/pause/Command 2.32
197 TestJSONOutput/unpause/Command 1.77
279 TestPause/serial/Pause 6.54
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.52
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.15
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.26
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.15
371 TestStartStop/group/old-k8s-version/serial/Pause 6.37
375 TestStartStop/group/embed-certs/serial/Pause 6.76
377 TestStartStop/group/no-preload/serial/Pause 8
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.13
386 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.02
392 TestStartStop/group/newest-cni/serial/Pause 5.99
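
Any entry in this table can be re-run in isolation with Go's subtest selector. A minimal sketch, assuming minikube's test/integration package layout and its -minikube-start-args passthrough flag (both upstream conventions, not shown in this report):

	# from a minikube checkout: re-run one failed subtest against the same
	# driver/runtime combination this job used
	go test -v -timeout 30m ./test/integration \
		-run 'TestAddons/parallel/Registry' \
		-args -minikube-start-args='--driver=docker --container-runtime=crio'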
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable volcano --alsologtostderr -v=1: exit status 11 (261.353734ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:37.513283   17229 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:37.513834   17229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:37.513852   17229 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:37.513858   17229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:37.514773   17229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:37.515191   17229 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:37.515576   17229 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:37.515594   17229 addons.go:607] checking whether the cluster is paused
	I1029 08:22:37.515695   17229 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:37.515715   17229 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:37.516147   17229 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:37.536858   17229 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:37.536914   17229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:37.554917   17229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:37.655257   17229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:37.655350   17229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:37.685157   17229 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:37.685178   17229 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:37.685182   17229 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:37.685185   17229 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:37.685188   17229 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:37.685191   17229 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:37.685195   17229 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:37.685198   17229 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:37.685202   17229 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:37.685209   17229 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:37.685216   17229 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:37.685220   17229 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:37.685224   17229 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:37.685228   17229 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:37.685232   17229 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:37.685238   17229 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:37.685244   17229 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:37.685250   17229 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:37.685255   17229 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:37.685258   17229 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:37.685261   17229 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:37.685268   17229 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:37.685272   17229 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:37.685292   17229 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:37.685297   17229 cri.go:89] found id: ""
	I1029 08:22:37.685348   17229 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:37.700137   17229 out.go:203] 
	W1029 08:22:37.701559   17229 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:37.701589   17229 out.go:285] * 
	* 
	W1029 08:22:37.704635   17229 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:37.706157   17229 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
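
Note: the test body was skipped for crio, but the trailing addon-disable step still ran and failed. The disable path's paused-state check shells into the node and runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node (see the stderr above). A minimal reproduction sketch, assuming the addons-306574 profile from this run is still up:

	# replay the exact command the paused-state check executes on the node
	minikube -p addons-306574 ssh -- sudo runc list -f json
	# observed on this runner: exit status 1 with
	#   level=error msg="open /run/runc: no such file or directory"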

TestAddons/parallel/Registry (13.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.451263ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002541912s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003238661s
addons_test.go:392: (dbg) Run:  kubectl --context addons-306574 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-306574 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-306574 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.655116605s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 ip
2025/10/29 08:22:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable registry --alsologtostderr -v=1: exit status 11 (252.918311ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:58.475208   19764 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:58.475369   19764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:58.475377   19764 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:58.475382   19764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:58.475564   19764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:58.475808   19764 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:58.476171   19764 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:58.476186   19764 addons.go:607] checking whether the cluster is paused
	I1029 08:22:58.476266   19764 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:58.476281   19764 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:58.476637   19764 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:58.494885   19764 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:58.494956   19764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:58.513561   19764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:58.612915   19764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:58.613020   19764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:58.642240   19764 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:58.642265   19764 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:58.642272   19764 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:58.642277   19764 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:58.642281   19764 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:58.642287   19764 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:58.642291   19764 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:58.642296   19764 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:58.642299   19764 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:58.642314   19764 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:58.642318   19764 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:58.642322   19764 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:58.642327   19764 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:58.642331   19764 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:58.642336   19764 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:58.642342   19764 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:58.642350   19764 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:58.642356   19764 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:58.642360   19764 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:58.642364   19764 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:58.642371   19764 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:58.642378   19764 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:58.642380   19764 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:58.642383   19764 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:58.642417   19764 cri.go:89] found id: ""
	I1029 08:22:58.642505   19764 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:58.656769   19764 out.go:203] 
	W1029 08:22:58.658217   19764 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:58.658236   19764 out.go:285] * 
	* 
	W1029 08:22:58.661400   19764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:58.662728   19764 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.13s)

TestAddons/parallel/RegistryCreds (0.44s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.951282ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-306574
addons_test.go:332: (dbg) Run:  kubectl --context addons-306574 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (263.10364ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:58.896124   19865 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:58.896564   19865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:58.896582   19865 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:58.896589   19865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:58.897056   19865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:58.897679   19865 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:58.898074   19865 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:58.898093   19865 addons.go:607] checking whether the cluster is paused
	I1029 08:22:58.898182   19865 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:58.898198   19865 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:58.898541   19865 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:58.918199   19865 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:58.918252   19865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:58.937667   19865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:59.038199   19865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:59.038282   19865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:59.074881   19865 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:59.074926   19865 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:59.074936   19865 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:59.074941   19865 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:59.074945   19865 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:59.074951   19865 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:59.074955   19865 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:59.074960   19865 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:59.074964   19865 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:59.074973   19865 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:59.074981   19865 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:59.074986   19865 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:59.075008   19865 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:59.075012   19865 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:59.075016   19865 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:59.075024   19865 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:59.075032   19865 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:59.075038   19865 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:59.075042   19865 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:59.075046   19865 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:59.075055   19865 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:59.075059   19865 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:59.075063   19865 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:59.075067   19865 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:59.075070   19865 cri.go:89] found id: ""
	I1029 08:22:59.075126   19865 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:59.090691   19865 out.go:203] 
	W1029 08:22:59.092496   19865 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:59.092516   19865 out.go:285] * 
	* 
	W1029 08:22:59.096528   19865 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:59.097841   19865 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)

TestAddons/parallel/Ingress (146.4s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-306574 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-306574 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-306574 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f9d992c3-5da7-450b-8d2b-70117cb2829a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f9d992c3-5da7-450b-8d2b-70117cb2829a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004182053s
I1029 08:23:02.875541    7218 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.782890372s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-306574 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
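
The decisive failure is the in-VM probe at addons_test.go:264: exit status 28 matches curl's "operation timed out" exit code, so the request through the ingress controller timed out rather than returning an error response. The probe can be replayed by hand exactly as the test ran it:

	# same probe the test ran inside the node (command reproduced from this log)
	minikube -p addons-306574 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# exit status 28 here would again indicate a curl timeout, not an HTTP error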
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-306574
helpers_test.go:243: (dbg) docker inspect addons-306574:

-- stdout --
	[
	    {
	        "Id": "64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c",
	        "Created": "2025-10-29T08:20:31.838258712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9193,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:20:31.883270971Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/hosts",
	        "LogPath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c-json.log",
	        "Name": "/addons-306574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-306574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-306574",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c",
	                "LowerDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-306574",
	                "Source": "/var/lib/docker/volumes/addons-306574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-306574",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-306574",
	                "name.minikube.sigs.k8s.io": "addons-306574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08a554e130cebdb108114b821dda4c7a12a11e230f8aa02b9bdf9687d2909484",
	            "SandboxKey": "/var/run/docker/netns/08a554e130ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-306574": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e5:6f:ca:dd:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "829551abfb51934fbddb1ff5d175377487c2b351054b25edadba7ad2f11d302a",
	                    "EndpointID": "dc28ab56d90a0d083579dad71621695248a8a160dbd9fbdf1bca6ec3a985dc31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-306574",
	                        "64d25cf53f5b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
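
Cross-reference: the SSH endpoint dialed by the addon-disable checks earlier in this report (127.0.0.1:32768 in the sshutil.go lines) comes from this inspect output's NetworkSettings.Ports map. The Go template minikube ran appears verbatim in those logs and can be replayed with ordinary shell quoting:

	# resolve the host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-306574
	# -> 32768 on this runner, matching the port the tests dialed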
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-306574 -n addons-306574
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-306574 logs -n 25: (1.155092907s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-110152 --alsologtostderr --binary-mirror http://127.0.0.1:45439 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-110152 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ -p binary-mirror-110152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-110152 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ addons  │ enable dashboard -p addons-306574                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-306574                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ start   │ -p addons-306574 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:22 UTC │
	│ addons  │ addons-306574 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ enable headlamp -p addons-306574 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ ip      │ addons-306574 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │ 29 Oct 25 08:22 UTC │
	│ addons  │ addons-306574 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-306574                                                                                                                                                                                                                                                                                                                                                                                           │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │ 29 Oct 25 08:22 UTC │
	│ addons  │ addons-306574 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ ssh     │ addons-306574 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-306574 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ ssh     │ addons-306574 ssh cat /opt/local-path-provisioner/pvc-58b50d07-4433-469f-9454-6e846c678332_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │ 29 Oct 25 08:23 UTC │
	│ addons  │ addons-306574 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-306574 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ addons  │ addons-306574 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │                     │
	│ ip      │ addons-306574 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-306574        │ jenkins │ v1.37.0 │ 29 Oct 25 08:25 UTC │ 29 Oct 25 08:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
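	
	The "Last Start" section below is the most recent cluster-start trace as collected by minikube's log gatherer. A rough way to re-collect the same output from this profile — a sketch, assuming the profile and the binary built for this run are still present on the agent — is:
	
	    out/minikube-linux-amd64 -p addons-306574 logs --file=lastStart.txt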
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:09.906757    8556 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:09.906896    8556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:09.906912    8556 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:09.906918    8556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:09.907149    8556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:20:09.907700    8556 out.go:368] Setting JSON to false
	I1029 08:20:09.908542    8556 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":158,"bootTime":1761725852,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:20:09.908624    8556 start.go:143] virtualization: kvm guest
	I1029 08:20:09.910438    8556 out.go:179] * [addons-306574] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:20:09.911971    8556 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:20:09.912004    8556 notify.go:221] Checking for updates...
	I1029 08:20:09.914600    8556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:09.915963    8556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:20:09.917359    8556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 08:20:09.918693    8556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:20:09.919979    8556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:20:09.921496    8556 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:09.944084    8556 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 08:20:09.944169    8556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:10.000881    8556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-29 08:20:09.99117085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:20:10.001037    8556 docker.go:319] overlay module found
	I1029 08:20:10.002704    8556 out.go:179] * Using the docker driver based on user configuration
	I1029 08:20:10.003865    8556 start.go:309] selected driver: docker
	I1029 08:20:10.003892    8556 start.go:930] validating driver "docker" against <nil>
	I1029 08:20:10.003906    8556 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:20:10.004613    8556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:10.060400    8556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-29 08:20:10.051126923 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:20:10.060598    8556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:10.060811    8556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:20:10.062305    8556 out.go:179] * Using Docker driver with root privileges
	I1029 08:20:10.063410    8556 cni.go:84] Creating CNI manager for ""
	I1029 08:20:10.063482    8556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:10.063495    8556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 08:20:10.063561    8556 start.go:353] cluster config:
	{Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:20:10.064973    8556 out.go:179] * Starting "addons-306574" primary control-plane node in "addons-306574" cluster
	I1029 08:20:10.066316    8556 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:20:10.067638    8556 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:20:10.068808    8556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:10.068851    8556 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 08:20:10.068861    8556 cache.go:59] Caching tarball of preloaded images
	I1029 08:20:10.068911    8556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:20:10.068935    8556 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 08:20:10.068943    8556 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:20:10.069303    8556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/config.json ...
	I1029 08:20:10.069330    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/config.json: {Name:mk5f603a5977d4732cb43592e784826e5c098291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:10.086271    8556 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1029 08:20:10.086383    8556 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1029 08:20:10.086400    8556 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1029 08:20:10.086404    8556 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1029 08:20:10.086411    8556 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1029 08:20:10.086418    8556 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1029 08:20:23.357314    8556 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1029 08:20:23.357364    8556 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:20:23.357397    8556 start.go:360] acquireMachinesLock for addons-306574: {Name:mkb2bc35c8399927cc17b5ede24d6fc9e49bd344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:20:23.357506    8556 start.go:364] duration metric: took 92.938µs to acquireMachinesLock for "addons-306574"
	I1029 08:20:23.357533    8556 start.go:93] Provisioning new machine with config: &{Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:20:23.357621    8556 start.go:125] createHost starting for "" (driver="docker")
	I1029 08:20:23.359355    8556 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1029 08:20:23.359596    8556 start.go:159] libmachine.API.Create for "addons-306574" (driver="docker")
	I1029 08:20:23.359631    8556 client.go:173] LocalClient.Create starting
	I1029 08:20:23.359735    8556 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 08:20:23.528657    8556 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 08:20:23.910424    8556 cli_runner.go:164] Run: docker network inspect addons-306574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 08:20:23.927823    8556 cli_runner.go:211] docker network inspect addons-306574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 08:20:23.927894    8556 network_create.go:284] running [docker network inspect addons-306574] to gather additional debugging logs...
	I1029 08:20:23.927918    8556 cli_runner.go:164] Run: docker network inspect addons-306574
	W1029 08:20:23.945148    8556 cli_runner.go:211] docker network inspect addons-306574 returned with exit code 1
	I1029 08:20:23.945177    8556 network_create.go:287] error running [docker network inspect addons-306574]: docker network inspect addons-306574: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-306574 not found
	I1029 08:20:23.945200    8556 network_create.go:289] output of [docker network inspect addons-306574]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-306574 not found
	
	** /stderr **
	I1029 08:20:23.945328    8556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:20:23.962934    8556 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d27c70}
	I1029 08:20:23.962982    8556 network_create.go:124] attempt to create docker network addons-306574 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1029 08:20:23.963053    8556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-306574 addons-306574
	I1029 08:20:24.023583    8556 network_create.go:108] docker network addons-306574 192.168.49.0/24 created
	I1029 08:20:24.023615    8556 kic.go:121] calculated static IP "192.168.49.2" for the "addons-306574" container
	I1029 08:20:24.023685    8556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 08:20:24.042007    8556 cli_runner.go:164] Run: docker volume create addons-306574 --label name.minikube.sigs.k8s.io=addons-306574 --label created_by.minikube.sigs.k8s.io=true
	I1029 08:20:24.060146    8556 oci.go:103] Successfully created a docker volume addons-306574
	I1029 08:20:24.060229    8556 cli_runner.go:164] Run: docker run --rm --name addons-306574-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306574 --entrypoint /usr/bin/test -v addons-306574:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 08:20:27.317149    8556 cli_runner.go:217] Completed: docker run --rm --name addons-306574-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306574 --entrypoint /usr/bin/test -v addons-306574:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.256885266s)
	I1029 08:20:27.317186    8556 oci.go:107] Successfully prepared a docker volume addons-306574
	I1029 08:20:27.317212    8556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:27.317237    8556 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 08:20:27.317299    8556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-306574:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 08:20:31.763947    8556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-306574:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.446598915s)
	I1029 08:20:31.763977    8556 kic.go:203] duration metric: took 4.446738239s to extract preloaded images to volume ...
	W1029 08:20:31.764089    8556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1029 08:20:31.764129    8556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1029 08:20:31.764166    8556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 08:20:31.821753    8556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-306574 --name addons-306574 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306574 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-306574 --network addons-306574 --ip 192.168.49.2 --volume addons-306574:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 08:20:32.133867    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Running}}
	I1029 08:20:32.154974    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:32.174763    8556 cli_runner.go:164] Run: docker exec addons-306574 stat /var/lib/dpkg/alternatives/iptables
	I1029 08:20:32.221358    8556 oci.go:144] the created container "addons-306574" has a running status.
	I1029 08:20:32.221394    8556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa...
	I1029 08:20:32.374637    8556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 08:20:32.403505    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:32.421721    8556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 08:20:32.421744    8556 kic_runner.go:114] Args: [docker exec --privileged addons-306574 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 08:20:32.477582    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:32.500046    8556 machine.go:94] provisionDockerMachine start ...
	I1029 08:20:32.500144    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:32.521940    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:32.522290    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:32.522307    8556 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:20:32.667953    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-306574
	
	I1029 08:20:32.668010    8556 ubuntu.go:182] provisioning hostname "addons-306574"
	I1029 08:20:32.668095    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:32.687731    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:32.687980    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:32.688014    8556 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-306574 && echo "addons-306574" | sudo tee /etc/hostname
	I1029 08:20:32.841191    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-306574
	
	I1029 08:20:32.841266    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:32.861731    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:32.861983    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:32.862021    8556 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-306574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-306574/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-306574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:20:33.003899    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:20:33.003933    8556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 08:20:33.003974    8556 ubuntu.go:190] setting up certificates
	I1029 08:20:33.004006    8556 provision.go:84] configureAuth start
	I1029 08:20:33.004078    8556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306574
	I1029 08:20:33.022143    8556 provision.go:143] copyHostCerts
	I1029 08:20:33.022216    8556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 08:20:33.022343    8556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 08:20:33.022403    8556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 08:20:33.022459    8556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.addons-306574 san=[127.0.0.1 192.168.49.2 addons-306574 localhost minikube]
	I1029 08:20:33.253245    8556 provision.go:177] copyRemoteCerts
	I1029 08:20:33.253302    8556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:20:33.253335    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.271402    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:33.372257    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 08:20:33.392440    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:20:33.410707    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 08:20:33.428466    8556 provision.go:87] duration metric: took 424.443345ms to configureAuth
	I1029 08:20:33.428490    8556 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:20:33.428681    8556 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:20:33.428798    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.448437    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:33.448676    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:33.448701    8556 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:20:33.703864    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:20:33.703893    8556 machine.go:97] duration metric: took 1.203820217s to provisionDockerMachine
	I1029 08:20:33.703906    8556 client.go:176] duration metric: took 10.344268871s to LocalClient.Create
	I1029 08:20:33.703933    8556 start.go:167] duration metric: took 10.344336656s to libmachine.API.Create "addons-306574"
	I1029 08:20:33.703944    8556 start.go:293] postStartSetup for "addons-306574" (driver="docker")
	I1029 08:20:33.703957    8556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:20:33.704039    8556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:20:33.704089    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.722678    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:33.825369    8556 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:20:33.829137    8556 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:20:33.829164    8556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:20:33.829175    8556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 08:20:33.829257    8556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 08:20:33.829294    8556 start.go:296] duration metric: took 125.343097ms for postStartSetup
	I1029 08:20:33.829674    8556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306574
	I1029 08:20:33.847441    8556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/config.json ...
	I1029 08:20:33.847735    8556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:20:33.847784    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.864929    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:33.963295    8556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:20:33.967828    8556 start.go:128] duration metric: took 10.610190443s to createHost
	I1029 08:20:33.967855    8556 start.go:83] releasing machines lock for "addons-306574", held for 10.610336125s
	I1029 08:20:33.967918    8556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306574
	I1029 08:20:33.985784    8556 ssh_runner.go:195] Run: cat /version.json
	I1029 08:20:33.985840    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.985854    8556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:20:33.985915    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:34.004200    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:34.008194    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:34.102298    8556 ssh_runner.go:195] Run: systemctl --version
	I1029 08:20:34.163972    8556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:20:34.200580    8556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:20:34.205362    8556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:20:34.205431    8556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:20:34.232483    8556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1029 08:20:34.232510    8556 start.go:496] detecting cgroup driver to use...
	I1029 08:20:34.232542    8556 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 08:20:34.232586    8556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:20:34.248262    8556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:20:34.260721    8556 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:20:34.260769    8556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:20:34.277484    8556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:20:34.296115    8556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:20:34.378206    8556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:20:34.465971    8556 docker.go:234] disabling docker service ...
	I1029 08:20:34.466051    8556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:20:34.483897    8556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:20:34.496865    8556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:20:34.581350    8556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:20:34.662554    8556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:20:34.674694    8556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:20:34.688350    8556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:20:34.688401    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.699123    8556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 08:20:34.699181    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.708321    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.717640    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.726743    8556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:20:34.735766    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.745125    8556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.759351    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.768763    8556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:20:34.776273    8556 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1029 08:20:34.776339    8556 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1029 08:20:34.789191    8556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:20:34.797829    8556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:20:34.874729    8556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:20:34.985627    8556 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:20:34.985702    8556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:20:34.989762    8556 start.go:564] Will wait 60s for crictl version
	I1029 08:20:34.989815    8556 ssh_runner.go:195] Run: which crictl
	I1029 08:20:34.993493    8556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:20:35.018217    8556 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 08:20:35.018341    8556 ssh_runner.go:195] Run: crio --version
	I1029 08:20:35.046556    8556 ssh_runner.go:195] Run: crio --version
	I1029 08:20:35.075415    8556 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:20:35.076578    8556 cli_runner.go:164] Run: docker network inspect addons-306574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:20:35.093360    8556 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:20:35.097529    8556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:20:35.108105    8556 kubeadm.go:884] updating cluster {Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:20:35.108213    8556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:35.108263    8556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:20:35.139296    8556 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:20:35.139319    8556 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:20:35.139377    8556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:20:35.164831    8556 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:20:35.164857    8556 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:20:35.164866    8556 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:20:35.164960    8556 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-306574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:20:35.165047    8556 ssh_runner.go:195] Run: crio config
	I1029 08:20:35.208252    8556 cni.go:84] Creating CNI manager for ""
	I1029 08:20:35.208276    8556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:35.208297    8556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:20:35.208318    8556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-306574 NodeName:addons-306574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:20:35.208454    8556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-306574"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 08:20:35.208513    8556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:20:35.216691    8556 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:20:35.216777    8556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 08:20:35.224812    8556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:20:35.238314    8556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:20:35.254187    8556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1029 08:20:35.266650    8556 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1029 08:20:35.270351    8556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:20:35.279970    8556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:20:35.360884    8556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:20:35.387704    8556 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574 for IP: 192.168.49.2
	I1029 08:20:35.387732    8556 certs.go:195] generating shared ca certs ...
	I1029 08:20:35.387754    8556 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:35.387963    8556 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 08:20:35.648830    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt ...
	I1029 08:20:35.648867    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt: {Name:mk19434f5fe1032a86a95cec63e899c58bd71e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:35.649101    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key ...
	I1029 08:20:35.649120    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key: {Name:mkf48a9e65e1fc5deb4dbacbb470b77a0ea967b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:35.649230    8556 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 08:20:36.202367    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt ...
	I1029 08:20:36.202411    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt: {Name:mk90b59f269020c36a09edabc548ec68458d54fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.202601    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key ...
	I1029 08:20:36.202616    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key: {Name:mkb7777a792c226f0bdd072bde419b5711b07f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.202686    8556 certs.go:257] generating profile certs ...
	I1029 08:20:36.202741    8556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.key
	I1029 08:20:36.202757    8556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt with IP's: []
	I1029 08:20:36.438899    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt ...
	I1029 08:20:36.438934    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: {Name:mkdb3c610cb943dacfd4b86491b16143782c58e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.439139    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.key ...
	I1029 08:20:36.439154    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.key: {Name:mk4714e932773a8b002dca10872328e2ffd71de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.439223    8556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a
	I1029 08:20:36.439242    8556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1029 08:20:36.997184    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a ...
	I1029 08:20:36.997217    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a: {Name:mk49fb4a8ef797f5a910f20b574ebbef85fb6c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.997394    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a ...
	I1029 08:20:36.997408    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a: {Name:mkebc158fb51d02e09da9dfd7eb30396310b38f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.997486    8556 certs.go:382] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt
	I1029 08:20:36.997590    8556 certs.go:386] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key
	I1029 08:20:36.997653    8556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key
	I1029 08:20:36.997671    8556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt with IP's: []
	I1029 08:20:37.106436    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt ...
	I1029 08:20:37.106465    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt: {Name:mk587246a2c54d75286b921b755d7486a9e60cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:37.106640    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key ...
	I1029 08:20:37.106653    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key: {Name:mk3c2ef6b48338ba8edc08369dd323973bd8b0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:37.106826    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 08:20:37.106863    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 08:20:37.106883    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:20:37.106908    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 08:20:37.107465    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:20:37.125245    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:20:37.143342    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:20:37.161691    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 08:20:37.179846    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 08:20:37.198409    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:20:37.216554    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:20:37.235353    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 08:20:37.253690    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:20:37.274111    8556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:20:37.286938    8556 ssh_runner.go:195] Run: openssl version
	I1029 08:20:37.293253    8556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:20:37.304949    8556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:20:37.309128    8556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:20:37.309182    8556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:20:37.342795    8556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
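
The openssl/ln pair above registers the minikube CA with the system trust store: OpenSSL's subject hash names the symlink (b5213941.0 here), which is the lookup key that c_rehash-style tooling expects. Done by hand it reduces to:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"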
	I1029 08:20:37.352034    8556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:20:37.355987    8556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
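
The failed stat is expected: apiserver-kubelet-client.crt only exists after a previous kubeadm init, so its absence is the heuristic that classifies this run as a first start rather than a restart. The check amounts to:

    if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "first start: kubeadm will generate the apiserver-kubelet-client pair"
    fi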
	I1029 08:20:37.356083    8556 kubeadm.go:401] StartCluster: {Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:20:37.356149    8556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:20:37.356191    8556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:20:37.383560    8556 cri.go:89] found id: ""
	I1029 08:20:37.383617    8556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:20:37.391909    8556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 08:20:37.400152    8556 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 08:20:37.400220    8556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 08:20:37.408405    8556 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 08:20:37.408445    8556 kubeadm.go:158] found existing configuration files:
	
	I1029 08:20:37.408503    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 08:20:37.416365    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 08:20:37.416432    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 08:20:37.424158    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 08:20:37.432020    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 08:20:37.432092    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 08:20:37.439838    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 08:20:37.447677    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 08:20:37.447741    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 08:20:37.455342    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 08:20:37.463198    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 08:20:37.463273    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
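
The four grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. Condensed into a loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done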
	I1029 08:20:37.471066    8556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 08:20:37.528754    8556 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1029 08:20:37.585800    8556 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 08:20:47.483196    8556 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 08:20:47.483272    8556 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 08:20:47.483355    8556 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 08:20:47.483402    8556 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1029 08:20:47.483435    8556 kubeadm.go:319] OS: Linux
	I1029 08:20:47.483476    8556 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 08:20:47.483559    8556 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 08:20:47.483633    8556 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 08:20:47.483674    8556 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 08:20:47.483731    8556 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 08:20:47.483808    8556 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 08:20:47.483884    8556 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 08:20:47.483936    8556 kubeadm.go:319] CGROUPS_IO: enabled
	I1029 08:20:47.484035    8556 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 08:20:47.484126    8556 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 08:20:47.484248    8556 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 08:20:47.484353    8556 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 08:20:47.486070    8556 out.go:252]   - Generating certificates and keys ...
	I1029 08:20:47.486171    8556 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 08:20:47.486276    8556 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 08:20:47.486376    8556 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 08:20:47.486476    8556 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 08:20:47.486582    8556 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 08:20:47.486669    8556 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 08:20:47.486753    8556 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 08:20:47.486920    8556 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-306574 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:20:47.487097    8556 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 08:20:47.487292    8556 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-306574 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:20:47.487421    8556 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 08:20:47.487534    8556 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 08:20:47.487609    8556 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 08:20:47.487711    8556 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 08:20:47.487797    8556 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 08:20:47.487886    8556 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 08:20:47.487985    8556 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 08:20:47.488094    8556 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 08:20:47.488151    8556 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 08:20:47.488235    8556 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 08:20:47.488296    8556 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 08:20:47.489651    8556 out.go:252]   - Booting up control plane ...
	I1029 08:20:47.489758    8556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 08:20:47.489884    8556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 08:20:47.489982    8556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 08:20:47.490114    8556 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 08:20:47.490236    8556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 08:20:47.490346    8556 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 08:20:47.490426    8556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 08:20:47.490461    8556 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 08:20:47.490577    8556 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 08:20:47.490750    8556 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 08:20:47.490808    8556 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00181545s
	I1029 08:20:47.490942    8556 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 08:20:47.491082    8556 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1029 08:20:47.491215    8556 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 08:20:47.491296    8556 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 08:20:47.491365    8556 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.222338032s
	I1029 08:20:47.491426    8556 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.246054752s
	I1029 08:20:47.491484    8556 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001295374s
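
The control-plane-check phase polls each component's own health endpoint: kube-apiserver on the node IP, controller-manager and scheduler on localhost. The same probes can be run by hand on the node (-k because the serving certificates are cluster-internal):

    curl -k https://192.168.49.2:8443/livez
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://127.0.0.1:10259/livez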
	I1029 08:20:47.491582    8556 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 08:20:47.491759    8556 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 08:20:47.491851    8556 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 08:20:47.492133    8556 kubeadm.go:319] [mark-control-plane] Marking the node addons-306574 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 08:20:47.492220    8556 kubeadm.go:319] [bootstrap-token] Using token: r5alvc.l4xv78fw0ie8bk9r
	I1029 08:20:47.494398    8556 out.go:252]   - Configuring RBAC rules ...
	I1029 08:20:47.494524    8556 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 08:20:47.494597    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 08:20:47.494740    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 08:20:47.494849    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 08:20:47.494961    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 08:20:47.495069    8556 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 08:20:47.495214    8556 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 08:20:47.495271    8556 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 08:20:47.495335    8556 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 08:20:47.495351    8556 kubeadm.go:319] 
	I1029 08:20:47.495415    8556 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 08:20:47.495425    8556 kubeadm.go:319] 
	I1029 08:20:47.495512    8556 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 08:20:47.495518    8556 kubeadm.go:319] 
	I1029 08:20:47.495544    8556 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 08:20:47.495631    8556 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 08:20:47.495685    8556 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 08:20:47.495692    8556 kubeadm.go:319] 
	I1029 08:20:47.495743    8556 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 08:20:47.495749    8556 kubeadm.go:319] 
	I1029 08:20:47.495800    8556 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 08:20:47.495807    8556 kubeadm.go:319] 
	I1029 08:20:47.495858    8556 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 08:20:47.495958    8556 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 08:20:47.496049    8556 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 08:20:47.496056    8556 kubeadm.go:319] 
	I1029 08:20:47.496141    8556 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 08:20:47.496217    8556 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 08:20:47.496223    8556 kubeadm.go:319] 
	I1029 08:20:47.496303    8556 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token r5alvc.l4xv78fw0ie8bk9r \
	I1029 08:20:47.496393    8556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 \
	I1029 08:20:47.496416    8556 kubeadm.go:319] 	--control-plane 
	I1029 08:20:47.496422    8556 kubeadm.go:319] 
	I1029 08:20:47.496500    8556 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 08:20:47.496506    8556 kubeadm.go:319] 
	I1029 08:20:47.496577    8556 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token r5alvc.l4xv78fw0ie8bk9r \
	I1029 08:20:47.496686    8556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 
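
The join commands pin the cluster CA via the sha256 discovery hash (the bootstrap token itself is short-lived, 24h by default). The hash can be recomputed from the CA certificate to confirm it matches what kubeadm printed; with minikube's certificate layout that is:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex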
	I1029 08:20:47.496697    8556 cni.go:84] Creating CNI manager for ""
	I1029 08:20:47.496703    8556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:47.498936    8556 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 08:20:47.500306    8556 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 08:20:47.504854    8556 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 08:20:47.504874    8556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 08:20:47.518778    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
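
With the docker driver paired with the crio runtime, minikube selects kindnet and applies the rendered manifest using the cluster's pinned kubectl, as logged above. One way to confirm the daemonset landed (the app=kindnet label is assumed from kindnet's upstream manifest):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet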
	I1029 08:20:47.725177    8556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 08:20:47.725312    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:47.725414    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-306574 minikube.k8s.io/updated_at=2025_10_29T08_20_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=addons-306574 minikube.k8s.io/primary=true
	I1029 08:20:47.798055    8556 ops.go:34] apiserver oom_adj: -16
	I1029 08:20:47.798063    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:48.298215    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:48.799094    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:49.298146    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:49.798185    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:50.298588    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:50.798514    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:51.298161    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:51.798889    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:52.298602    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:52.366811    8556 kubeadm.go:1114] duration metric: took 4.641537882s to wait for elevateKubeSystemPrivileges
	I1029 08:20:52.366841    8556 kubeadm.go:403] duration metric: took 15.010794887s to StartCluster
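
The burst of "get sa default" calls above is the elevateKubeSystemPrivileges wait: minikube polls at roughly 500ms intervals until the default service account exists, so the minikube-rbac cluster-admin binding created earlier can take effect. The loop it amounts to:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done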
	I1029 08:20:52.366862    8556 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:52.367029    8556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:20:52.368137    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:52.368441    8556 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:20:52.368763    8556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 08:20:52.368721    8556 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1029 08:20:52.368946    8556 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-306574"
	I1029 08:20:52.368986    8556 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-306574"
	I1029 08:20:52.369021    8556 addons.go:70] Setting cloud-spanner=true in profile "addons-306574"
	I1029 08:20:52.369036    8556 addons.go:239] Setting addon cloud-spanner=true in "addons-306574"
	I1029 08:20:52.369037    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369081    8556 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-306574"
	I1029 08:20:52.369090    8556 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-306574"
	I1029 08:20:52.369122    8556 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-306574"
	I1029 08:20:52.369154    8556 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-306574"
	I1029 08:20:52.369134    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369163    8556 addons.go:70] Setting yakd=true in profile "addons-306574"
	I1029 08:20:52.369176    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369178    8556 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:20:52.369199    8556 addons.go:70] Setting default-storageclass=true in profile "addons-306574"
	I1029 08:20:52.369228    8556 addons.go:239] Setting addon yakd=true in "addons-306574"
	I1029 08:20:52.369238    8556 addons.go:70] Setting metrics-server=true in profile "addons-306574"
	I1029 08:20:52.369241    8556 addons.go:70] Setting gcp-auth=true in profile "addons-306574"
	I1029 08:20:52.369263    8556 addons.go:239] Setting addon metrics-server=true in "addons-306574"
	I1029 08:20:52.369273    8556 mustload.go:66] Loading cluster: addons-306574
	I1029 08:20:52.369300    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369314    8556 addons.go:70] Setting storage-provisioner=true in profile "addons-306574"
	I1029 08:20:52.369335    8556 addons.go:239] Setting addon storage-provisioner=true in "addons-306574"
	I1029 08:20:52.369343    8556 addons.go:70] Setting volcano=true in profile "addons-306574"
	I1029 08:20:52.369362    8556 addons.go:239] Setting addon volcano=true in "addons-306574"
	I1029 08:20:52.369371    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369384    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369471    8556 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:20:52.369791    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369930    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369933    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369943    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369949    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369968    8556 addons.go:70] Setting registry=true in profile "addons-306574"
	I1029 08:20:52.369982    8556 addons.go:239] Setting addon registry=true in "addons-306574"
	I1029 08:20:52.369301    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.370448    8556 addons.go:70] Setting registry-creds=true in profile "addons-306574"
	I1029 08:20:52.370466    8556 out.go:179] * Verifying Kubernetes components...
	I1029 08:20:52.370472    8556 addons.go:239] Setting addon registry-creds=true in "addons-306574"
	I1029 08:20:52.370510    8556 addons.go:70] Setting inspektor-gadget=true in profile "addons-306574"
	I1029 08:20:52.370526    8556 addons.go:239] Setting addon inspektor-gadget=true in "addons-306574"
	I1029 08:20:52.370555    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.370605    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369231    8556 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-306574"
	I1029 08:20:52.370869    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370923    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.371013    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370477    8556 addons.go:70] Setting ingress=true in profile "addons-306574"
	I1029 08:20:52.372474    8556 addons.go:239] Setting addon ingress=true in "addons-306574"
	I1029 08:20:52.372558    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369953    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.373115    8556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:20:52.373271    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370453    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.373796    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370489    8556 addons.go:70] Setting ingress-dns=true in profile "addons-306574"
	I1029 08:20:52.374823    8556 addons.go:239] Setting addon ingress-dns=true in "addons-306574"
	I1029 08:20:52.370500    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.372330    8556 addons.go:70] Setting volumesnapshots=true in profile "addons-306574"
	I1029 08:20:52.372346    8556 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-306574"
	I1029 08:20:52.369179    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.374985    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.375527    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.375725    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.375743    8556 addons.go:239] Setting addon volumesnapshots=true in "addons-306574"
	I1029 08:20:52.376326    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.375755    8556 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-306574"
	I1029 08:20:52.376288    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.387385    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.388986    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.417712    8556 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 08:20:52.419161    8556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:20:52.419184    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 08:20:52.419248    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.424507    8556 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1029 08:20:52.432223    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1029 08:20:52.432259    8556 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1029 08:20:52.432327    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.437120    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.443875    8556 addons.go:239] Setting addon default-storageclass=true in "addons-306574"
	I1029 08:20:52.446556    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.448619    8556 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1029 08:20:52.448793    8556 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1029 08:20:52.449414    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.449628    8556 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:20:52.449644    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1029 08:20:52.449692    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.453648    8556 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:20:52.453673    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1029 08:20:52.453739    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.464150    8556 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1029 08:20:52.465244    8556 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1029 08:20:52.465244    8556 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1029 08:20:52.466169    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1029 08:20:52.466258    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.470481    8556 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:20:52.470505    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1029 08:20:52.470586    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	W1029 08:20:52.472569    8556 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1029 08:20:52.481497    8556 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-306574"
	I1029 08:20:52.481564    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.482131    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.504035    8556 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1029 08:20:52.504152    8556 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1029 08:20:52.504180    8556 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1029 08:20:52.506568    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1029 08:20:52.506592    8556 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1029 08:20:52.506661    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.506846    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1029 08:20:52.506916    8556 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1029 08:20:52.506927    8556 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1029 08:20:52.506976    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.507185    8556 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:20:52.507199    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1029 08:20:52.507244    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.508343    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1029 08:20:52.508362    8556 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1029 08:20:52.508414    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.515794    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1029 08:20:52.515877    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1029 08:20:52.519224    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:20:52.520489    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1029 08:20:52.521652    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:20:52.521747    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1029 08:20:52.523674    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1029 08:20:52.523852    8556 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:20:52.523876    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1029 08:20:52.523945    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.526230    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1029 08:20:52.526523    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.531830    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1029 08:20:52.531838    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.535886    8556 out.go:179]   - Using image docker.io/registry:3.0.0
	I1029 08:20:52.536922    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1029 08:20:52.536971    8556 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1029 08:20:52.542187    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1029 08:20:52.542289    8556 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1029 08:20:52.542300    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1029 08:20:52.542361    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.543362    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1029 08:20:52.543383    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1029 08:20:52.543459    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.545651    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.545717    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.554807    8556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
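
The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts stanza mapping host.minikube.internal to the gateway 192.168.49.1 ahead of the forward plugin, and turns on the log plugin before errors. To inspect the patched Corefile:

    # expected to contain:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'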
	I1029 08:20:52.556590    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.557527    8556 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 08:20:52.557551    8556 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 08:20:52.557665    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.558779    8556 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1029 08:20:52.560029    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.568161    8556 out.go:179]   - Using image docker.io/busybox:stable
	I1029 08:20:52.570480    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.571195    8556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:20:52.571217    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1029 08:20:52.571279    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.573098    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.578541    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.592748    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.595121    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.598974    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.602092    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.611277    8556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:20:52.625045    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.635401    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	W1029 08:20:52.641153    8556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1029 08:20:52.641263    8556 retry.go:31] will retry after 283.064128ms: ssh: handshake failed: EOF
	I1029 08:20:52.736278    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:20:52.740304    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:20:52.752490    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1029 08:20:52.752541    8556 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1029 08:20:52.757757    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1029 08:20:52.757785    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1029 08:20:52.761085    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:20:52.779377    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1029 08:20:52.779438    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1029 08:20:52.786818    8556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1029 08:20:52.786859    8556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1029 08:20:52.788466    8556 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1029 08:20:52.788508    8556 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1029 08:20:52.790059    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:20:52.796509    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:20:52.796538    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1029 08:20:52.796553    8556 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1029 08:20:52.802963    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 08:20:52.802976    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:20:52.805653    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1029 08:20:52.805679    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1029 08:20:52.805919    8556 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:52.805942    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1029 08:20:52.821327    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1029 08:20:52.821362    8556 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1029 08:20:52.827362    8556 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:20:52.827384    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1029 08:20:52.836135    8556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1029 08:20:52.836236    8556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1029 08:20:52.842197    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1029 08:20:52.842956    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1029 08:20:52.842973    8556 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1029 08:20:52.856389    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1029 08:20:52.856413    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1029 08:20:52.868061    8556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1029 08:20:52.868141    8556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1029 08:20:52.868962    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:52.878437    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:20:52.878542    8556 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1029 08:20:52.884271    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:20:52.901778    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:20:52.901857    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1029 08:20:52.909381    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1029 08:20:52.909471    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1029 08:20:52.917745    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1029 08:20:52.917769    8556 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1029 08:20:52.941732    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:20:52.978686    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1029 08:20:52.979065    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1029 08:20:52.983380    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:20:52.991718    8556 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:20:52.991798    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1029 08:20:53.029329    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1029 08:20:53.029355    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1029 08:20:53.030833    8556 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1029 08:20:53.032729    8556 node_ready.go:35] waiting up to 6m0s for node "addons-306574" to be "Ready" ...
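
The node_ready.go wait above polls the node object until its Ready condition turns True; the `"Ready":"False" status (will retry)` warnings later in this log are that same check failing. A minimal client-go sketch of the pattern, under the assumption that minikube uses a conditions loop roughly like this (the helper name is hypothetical, not minikube's actual code):

// waitNodeReady polls a node's Ready condition until it is True or the
// timeout elapses. Sketch only; minikube's node_ready.go differs in detail.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(context.Background(), cs, "addons-306574", 6*time.Minute))
}
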
	I1029 08:20:53.082101    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:20:53.122179    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1029 08:20:53.122213    8556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1029 08:20:53.199344    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1029 08:20:53.199380    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1029 08:20:53.258286    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1029 08:20:53.258313    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1029 08:20:53.271229    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:20:53.281233    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:20:53.281321    8556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1029 08:20:53.307571    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:20:53.536712    8556 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-306574" context rescaled to 1 replicas
	I1029 08:20:53.989071    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.25275202s)
	I1029 08:20:53.989122    8556 addons.go:480] Verifying addon ingress=true in "addons-306574"
	I1029 08:20:53.989119    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248780728s)
	I1029 08:20:53.989241    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.199158787s)
	I1029 08:20:53.989214    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.228051694s)
	I1029 08:20:53.989319    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.192782858s)
	I1029 08:20:53.989399    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.186366573s)
	I1029 08:20:53.989416    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.186425983s)
	I1029 08:20:53.989448    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.147173088s)
	I1029 08:20:53.989543    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.120556026s)
	I1029 08:20:53.989575    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.10527748s)
	W1029 08:20:53.989580    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:53.989591    8556 addons.go:480] Verifying addon registry=true in "addons-306574"
	I1029 08:20:53.989597    8556 retry.go:31] will retry after 281.431723ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
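
The stderr above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest carries no top-level apiVersion or kind. The other objects in the apply go through (the daemonset and RBAC objects report created), but the CRD file itself never will, which is why every retry in the rest of this log fails identically. A sketch of that same type-meta check in Go (hypothetical helper, using sigs.k8s.io/yaml; a real multi-document manifest would need splitting on "---" first):

// hasTypeMeta reports whether a manifest has the top-level apiVersion and
// kind fields that kubectl validation requires -- the fields ig-crd.yaml is
// missing in the errors above. Sketch only.
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

func hasTypeMeta(path string) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	var doc struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}
	if err := yaml.Unmarshal(raw, &doc); err != nil {
		return false, err
	}
	return doc.APIVersion != "" && doc.Kind != "", nil
}

func main() {
	ok, err := hasTypeMeta("/etc/kubernetes/addons/ig-crd.yaml")
	fmt.Println(ok, err) // would print "false <nil>" for the manifest rejected above
}
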
	I1029 08:20:53.989641    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.047802688s)
	I1029 08:20:53.989663    8556 addons.go:480] Verifying addon metrics-server=true in "addons-306574"
	I1029 08:20:53.989689    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.006271075s)
	I1029 08:20:53.990643    8556 out.go:179] * Verifying ingress addon...
	I1029 08:20:53.991473    8556 out.go:179] * Verifying registry addon...
	I1029 08:20:53.991487    8556 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-306574 service yakd-dashboard -n yakd-dashboard
	
	I1029 08:20:53.994200    8556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1029 08:20:53.994209    8556 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1029 08:20:53.997159    8556 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1029 08:20:53.997254    8556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:20:53.997271    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
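
The kapi.go:96 lines that fill the rest of this log are a poll loop: list the pods matching a label selector and wait until none of them reports Pending. A rough client-go equivalent (sketch only, not minikube's actual implementation; the selector and namespace come from the log line above):

// waitPodsRunning waits until every pod matching the selector is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // no pods yet; keep waiting
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // matches the "current state: Pending" lines
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodsRunning(context.Background(), cs,
		"kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
}
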
	I1029 08:20:54.271730    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:54.413835    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.331683892s)
	W1029 08:20:54.413888    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1029 08:20:54.413897    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.142635146s)
	I1029 08:20:54.413916    8556 retry.go:31] will retry after 223.011828ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
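
The "ensure CRDs are installed first" error is an ordering race: the VolumeSnapshotClass object reached the API server in the same apply as the CRDs that define it, before discovery had picked the new types up. The forced re-apply at 08:20:54.637 below succeeds once the CRDs register. Where ordering matters, one can wait for the CRD's Established condition explicitly, roughly like this (sketch, assuming the apiextensions clientset; the CRD name comes from the stdout above):

// waitCRDEstablished blocks until the named CRD reports Established=True --
// the precondition the error above points at. Sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/clientcmd"
)

func waitCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not visible yet; keep polling
			}
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := apiextclient.NewForConfigOrDie(cfg)
	fmt.Println(waitCRDEstablished(context.Background(), cs,
		"volumesnapshotclasses.snapshot.storage.k8s.io"))
}
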
	I1029 08:20:54.414178    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.106567738s)
	I1029 08:20:54.414209    8556 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-306574"
	I1029 08:20:54.416184    8556 out.go:179] * Verifying csi-hostpath-driver addon...
	I1029 08:20:54.418159    8556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1029 08:20:54.420466    8556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:20:54.420486    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:54.521686    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:54.521711    8556 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1029 08:20:54.521727    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:54.637314    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1029 08:20:54.902678    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:54.902714    8556 retry.go:31] will retry after 232.688774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:54.921585    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:54.997110    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:54.997279    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:55.036017    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:20:55.135975    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:55.422233    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:55.522618    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:55.522827    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:55.922169    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:55.998072    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:55.998131    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:56.421169    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:56.521981    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:56.522198    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:56.921479    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:56.997457    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:56.997532    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:57.151706    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.514344936s)
	I1029 08:20:57.151793    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.015749959s)
	W1029 08:20:57.151828    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:57.151854    8556 retry.go:31] will retry after 397.704638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:57.421254    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:57.521679    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:57.521931    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:57.535454    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:20:57.550710    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:57.921319    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:57.997284    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:57.997515    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:58.088058    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:58.088092    8556 retry.go:31] will retry after 632.016127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:58.422630    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:58.523181    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:58.523370    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:58.721037    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:58.921676    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:58.997295    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:58.997446    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:59.258509    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:59.258539    8556 retry.go:31] will retry after 1.52680531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
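
The retry.go:31 delays in this log grow from a few hundred milliseconds toward several seconds (223ms, 281ms, 397ms, 632ms, 1.52s, 2.5s, 3.2s, 5.6s, 9.0s), consistent with a backoff-with-jitter loop around the failing apply. A minimal sketch of that shape follows; the growth factor and jitter here are illustrative guesses, not minikube's exact policy:

// retryWithBackoff re-runs fn, sleeping progressively longer between
// attempts, mirroring the "will retry after ..." lines above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // roughly matches the doubling visible in the log
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 250*time.Millisecond, func() error {
		return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
	})
}
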
	I1029 08:20:59.421933    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:59.523011    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:59.523018    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:59.535540    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:20:59.921808    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:59.997516    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:59.997546    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:00.055187    8556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1029 08:21:00.055254    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:21:00.073503    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
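
The -f argument passed to docker container inspect above is a Go text/template that digs the host port mapped to container port 22 out of the inspect JSON. The sketch below evaluates the same expression against stand-in data; the 32768 value is taken from the ssh client line above:

// Evaluates the inspect format string from the log against mock data.
package main

import (
	"os"
	"text/template"
)

func main() {
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string]any{
				"22/tcp": []map[string]any{{"HostPort": "32768"}},
			},
		},
	}
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	tmpl.Execute(os.Stdout, data) // prints 32768
}
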
	I1029 08:21:00.190419    8556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1029 08:21:00.204449    8556 addons.go:239] Setting addon gcp-auth=true in "addons-306574"
	I1029 08:21:00.204500    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:21:00.204851    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:21:00.223226    8556 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1029 08:21:00.223289    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:21:00.242686    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:21:00.342194    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:00.343553    8556 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1029 08:21:00.344668    8556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1029 08:21:00.344688    8556 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1029 08:21:00.358655    8556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1029 08:21:00.358680    8556 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1029 08:21:00.371732    8556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:00.371752    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1029 08:21:00.384679    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:00.422089    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:00.497682    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:00.497843    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:00.697361    8556 addons.go:480] Verifying addon gcp-auth=true in "addons-306574"
	I1029 08:21:00.698633    8556 out.go:179] * Verifying gcp-auth addon...
	I1029 08:21:00.701222    8556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1029 08:21:00.703778    8556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1029 08:21:00.703795    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:00.786113    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:00.921778    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:00.997684    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:00.997770    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:01.204735    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:01.339832    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:01.339864    8556 retry.go:31] will retry after 2.504972298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:01.421573    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:01.497160    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:01.497355    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:01.535971    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:01.704853    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:01.921958    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:01.997724    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:01.997892    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:02.203943    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:02.421734    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:02.497519    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:02.497711    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:02.704829    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:02.921855    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:02.997471    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:02.997666    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:03.203934    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:03.421805    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:03.497527    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:03.497707    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:03.536257    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:03.703769    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:03.846031    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:03.921825    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:03.997642    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:03.997759    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:04.205034    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:04.385861    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:04.385895    8556 retry.go:31] will retry after 3.240460661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:04.421427    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:04.496978    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:04.497155    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:04.704392    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:04.921301    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:04.996802    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:04.996936    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:05.205313    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:05.421154    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:05.497816    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:05.497950    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:05.703827    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:05.922093    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:05.997645    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:05.997801    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:06.036443    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:06.204282    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:06.420928    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:06.497452    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:06.497655    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:06.704553    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:06.921242    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:06.996833    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:06.997092    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:07.204253    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:07.421408    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:07.497241    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:07.497297    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:07.627108    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:07.704395    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:07.921665    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:07.998217    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:07.998389    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:08.171982    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:08.172036    8556 retry.go:31] will retry after 5.626189077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:08.204932    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:08.421925    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:08.497324    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:08.497473    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:08.536098    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:08.704723    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:08.921737    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:08.997326    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:08.997483    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:09.204858    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:09.421526    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:09.497394    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:09.497456    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:09.704887    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:09.922212    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:09.997866    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:09.997981    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:10.203960    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:10.421803    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:10.497608    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:10.497633    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:10.704826    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:10.921499    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:10.997142    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:10.997327    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:11.035582    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:11.204224    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:11.420854    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:11.497449    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:11.497618    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:11.705090    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:11.922094    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:11.997788    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:11.997935    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:12.204206    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:12.421043    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:12.497877    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:12.498155    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:12.704065    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:12.921866    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:12.997568    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:12.997718    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:13.036123    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:13.205129    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:13.420719    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:13.497389    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:13.497520    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:13.704768    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:13.798966    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:13.922018    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:14.000317    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:14.000518    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:14.204074    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:14.341164    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:14.341196    8556 retry.go:31] will retry after 9.005876741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:14.420499    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:14.496889    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:14.497083    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:14.703823    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:14.921524    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:14.997296    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:14.997475    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:15.204113    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:15.421184    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:15.497820    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:15.498019    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:15.535368    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:15.703868    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:15.921683    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:15.997342    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:15.997490    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:16.204844    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:16.421529    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:16.497121    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:16.497294    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:16.704081    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:16.921722    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:16.997160    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:16.997277    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:17.204339    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:17.421211    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:17.497648    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:17.497920    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:17.536348    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:17.705222    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:17.921135    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:17.997830    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:17.997905    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:18.204151    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:18.420912    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:18.497418    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:18.497550    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:18.704676    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:18.921835    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:18.997450    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:18.997659    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:19.204705    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:19.421316    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:19.496761    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:19.496875    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:19.704341    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:19.921290    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:19.997081    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:19.997099    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:20.035387    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:20.204118    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:20.422031    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:20.497698    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:20.497808    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:20.704100    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:20.922048    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:20.997827    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:20.997906    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:21.204697    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:21.421544    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:21.497109    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:21.497298    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:21.704401    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:21.921234    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:21.998101    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:21.998259    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:22.035871    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:22.204647    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:22.421404    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:22.497634    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:22.497702    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:22.704152    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:22.920890    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:22.997612    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:22.997847    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:23.204737    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:23.347976    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:23.421312    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:23.497156    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:23.497211    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:23.704401    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:23.888864    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:23.888894    8556 retry.go:31] will retry after 11.978787272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:23.921452    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:23.997413    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:23.997567    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:24.036153    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:24.205126    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:24.421936    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:24.497322    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:24.497484    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:24.704656    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:24.921600    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:24.997041    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:24.997216    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:25.204715    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:25.421566    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:25.497396    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:25.497514    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:25.704749    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:25.921620    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:25.997044    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:25.997096    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:26.204254    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:26.420713    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:26.497386    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:26.497627    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:26.536125    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:26.705007    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:26.921722    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:26.997674    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:26.997700    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:27.204264    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:27.421113    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:27.497570    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:27.497789    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:27.704856    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:27.921957    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:27.997710    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:27.997858    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:28.204300    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:28.420907    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:28.497646    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:28.497696    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:28.536583    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:28.704399    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:28.921139    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:28.998080    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:28.998084    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:29.204186    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:29.420860    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:29.497659    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:29.497707    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:29.704732    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:29.921793    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:29.997614    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:29.997755    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:30.204058    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:30.421926    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:30.497461    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:30.497623    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:30.704786    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:30.921708    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:30.997174    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:30.997226    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:31.035621    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:31.204458    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:31.421331    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:31.496758    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:31.497078    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:31.704192    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:31.920956    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:31.997697    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:31.997855    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:32.204484    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:32.421154    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:32.497045    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:32.497197    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:32.704843    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:32.921769    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:32.997424    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:32.997583    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:33.035972    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:33.204675    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:33.421491    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:33.497225    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:33.497296    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:33.537550    8556 node_ready.go:49] node "addons-306574" is "Ready"
	I1029 08:21:33.537586    8556 node_ready.go:38] duration metric: took 40.504833878s for node "addons-306574" to be "Ready" ...
	I1029 08:21:33.537607    8556 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:21:33.537665    8556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:21:33.559200    8556 api_server.go:72] duration metric: took 41.19072057s to wait for apiserver process to appear ...
	I1029 08:21:33.559238    8556 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:21:33.559265    8556 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:21:33.564640    8556 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
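The healthz probe logged above can be reproduced by hand against the same endpoint; a minimal sketch (the -k flag skips certificate verification, which minikube's own client normally handles via its cluster CA):

	curl -k https://192.168.49.2:8443/healthz
	# prints "ok" on success, matching the 200 response recorded above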
	I1029 08:21:33.565566    8556 api_server.go:141] control plane version: v1.34.1
	I1029 08:21:33.565599    8556 api_server.go:131] duration metric: took 6.35296ms to wait for apiserver health ...
	I1029 08:21:33.565610    8556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:21:33.573845    8556 system_pods.go:59] 20 kube-system pods found
	I1029 08:21:33.573897    8556 system_pods.go:61] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending
	I1029 08:21:33.573906    8556 system_pods.go:61] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending
	I1029 08:21:33.573911    8556 system_pods.go:61] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending
	I1029 08:21:33.573916    8556 system_pods.go:61] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending
	I1029 08:21:33.573921    8556 system_pods.go:61] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending
	I1029 08:21:33.573926    8556 system_pods.go:61] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:33.573931    8556 system_pods.go:61] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:33.573936    8556 system_pods.go:61] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:33.573941    8556 system_pods.go:61] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:33.573953    8556 system_pods.go:61] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:33.573958    8556 system_pods.go:61] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:33.573965    8556 system_pods.go:61] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:33.573969    8556 system_pods.go:61] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending
	I1029 08:21:33.573973    8556 system_pods.go:61] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending
	I1029 08:21:33.573981    8556 system_pods.go:61] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:33.573986    8556 system_pods.go:61] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending
	I1029 08:21:33.574009    8556 system_pods.go:61] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending
	I1029 08:21:33.574017    8556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending
	I1029 08:21:33.574025    8556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending
	I1029 08:21:33.574033    8556 system_pods.go:61] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:33.574043    8556 system_pods.go:74] duration metric: took 8.423698ms to wait for pod list to return data ...
	I1029 08:21:33.574055    8556 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:21:33.576533    8556 default_sa.go:45] found service account: "default"
	I1029 08:21:33.576561    8556 default_sa.go:55] duration metric: took 2.498343ms for default service account to be created ...
	I1029 08:21:33.576573    8556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:21:33.582822    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:33.582856    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending
	I1029 08:21:33.582867    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:33.582874    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending
	I1029 08:21:33.582881    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending
	I1029 08:21:33.582886    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending
	I1029 08:21:33.582890    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:33.582897    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:33.582903    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:33.582909    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:33.582923    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:33.582932    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:33.582940    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:33.582951    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending
	I1029 08:21:33.582960    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending
	I1029 08:21:33.582968    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:33.582977    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending
	I1029 08:21:33.582984    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending
	I1029 08:21:33.583003    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending
	I1029 08:21:33.583010    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending
	I1029 08:21:33.583018    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:33.583036    8556 retry.go:31] will retry after 262.672298ms: missing components: kube-dns
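The missing component named in the retry, kube-dns, corresponds to the coredns pod still listed as Pending above. An equivalent manual check, assuming the standard k8s-app=kube-dns label that coredns deployments carry:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	# the wait loop keeps retrying until this pod reports Running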
	I1029 08:21:33.704875    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:33.852060    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:33.852112    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:33.852123    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:33.852133    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:33.852144    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:33.852227    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:33.852261    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:33.852272    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:33.852277    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:33.852283    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:33.852324    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:33.852352    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:33.852359    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:33.852392    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:33.852403    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:33.852415    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:33.852427    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:33.852435    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:33.852444    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:33.852484    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:33.852500    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:33.852528    8556 retry.go:31] will retry after 258.882234ms: missing components: kube-dns
	I1029 08:21:33.950827    8556 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:21:33.950854    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:34.051215    8556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:21:34.051239    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.051279    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:34.116041    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:34.116073    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:34.116080    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:34.116088    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:34.116100    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:34.116105    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:34.116110    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:34.116115    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:34.116118    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:34.116122    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:34.116127    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:34.116130    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:34.116134    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:34.116138    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:34.116144    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:34.116150    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:34.116155    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:34.116159    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:34.116167    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.116172    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.116177    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:34.116193    8556 retry.go:31] will retry after 486.917132ms: missing components: kube-dns
	I1029 08:21:34.204214    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:34.421812    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:34.498862    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:34.499813    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.608079    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:34.608118    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:34.608129    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:34.608141    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:34.608149    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:34.608161    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:34.608171    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:34.608181    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:34.608190    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:34.608196    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:34.608207    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:34.608215    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:34.608221    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:34.608231    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:34.608242    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:34.608254    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:34.608266    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:34.608278    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:34.608291    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.608308    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.608318    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:34.608338    8556 retry.go:31] will retry after 435.141221ms: missing components: kube-dns
	I1029 08:21:34.704069    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:34.922521    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:34.997928    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.998092    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.051415    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:35.051470    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:35.051481    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Running
	I1029 08:21:35.051494    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:35.051503    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:35.051513    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:35.051521    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:35.051528    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:35.051534    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:35.051541    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:35.051550    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:35.051556    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:35.051562    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:35.051570    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:35.051579    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:35.051589    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:35.051597    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:35.051606    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:35.051616    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:35.051633    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:35.051640    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Running
	I1029 08:21:35.051658    8556 system_pods.go:126] duration metric: took 1.475077811s to wait for k8s-apps to be running ...
	I1029 08:21:35.051670    8556 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:21:35.051724    8556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:21:35.071246    8556 system_svc.go:56] duration metric: took 19.56463ms WaitForService to wait for kubelet
	I1029 08:21:35.071282    8556 kubeadm.go:587] duration metric: took 42.702809602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:21:35.071307    8556 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:21:35.074954    8556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 08:21:35.075015    8556 node_conditions.go:123] node cpu capacity is 8
	I1029 08:21:35.075034    8556 node_conditions.go:105] duration metric: took 3.721417ms to run NodePressure ...
	I1029 08:21:35.075051    8556 start.go:242] waiting for startup goroutines ...
	I1029 08:21:35.205097    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:35.422632    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.497591    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.497762    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.704532    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:35.868698    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:35.922226    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.998217    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.998328    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:36.204744    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:36.422784    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.522869    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.523005    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:36.524730    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:36.524759    8556 retry.go:31] will retry after 8.489429665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
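
Note on the failure above: kubectl client-side validation checks every document in /etc/kubernetes/addons/ig-crd.yaml, and the stderr "[apiVersion not set, kind not set]" means at least one document in that file is missing those required top-level fields, so each timed retry hits the identical error. A minimal diagnostic sketch, assuming shell access to the host and node (profile name, binary path, and file paths are taken from the log above, not verified here):

	# From the host: inspect the first lines of the offending manifest on the node.
	minikube -p addons-306574 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# From the node: a client-side dry run should reproduce the same validation
	# error without touching the cluster.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml
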
	I1029 08:21:36.704707    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:36.921984    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.997917    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.998153    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.207683    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:37.423270    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:37.499173    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.499415    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.704675    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:37.922489    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:37.997974    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.997985    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.205567    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:38.422120    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.498167    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.498187    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.705132    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:38.921768    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.997742    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.997961    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:39.204341    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.422391    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:39.498054    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.498324    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:39.705090    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.921674    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:39.998137    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.998408    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.205620    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.421874    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.497546    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.497700    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.705116    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.922802    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.997746    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.997783    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.274085    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.422417    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:41.523108    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.523112    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.705051    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.922461    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:41.998292    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.998379    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.205525    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.422141    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:42.497926    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:42.497968    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.705193    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.922410    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.023042    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.023269    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.204557    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:43.421462    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.497360    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.497382    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.704408    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:43.922450    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.998097    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.998932    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.205161    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.421478    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:44.497350    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.497585    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:44.704664    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.922472    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:44.997205    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.997242    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:45.014319    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:45.204919    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.422041    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.497350    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.497389    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:45.689892    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:45.689924    8556 retry.go:31] will retry after 14.552494066s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:45.704631    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.922358    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.998499    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.998525    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.204733    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:46.422178    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.523523    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.523660    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.704327    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:46.921185    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.998058    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.998093    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.204429    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.422264    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:47.498356    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.498382    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.704070    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.922823    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:47.998286    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.998375    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.204633    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.422108    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.497868    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.497908    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.704942    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.921957    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.997958    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.998366    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.205113    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.422772    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:49.497841    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.497898    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.705077    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.921456    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:49.997528    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.997614    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.205240    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.421403    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.498176    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.498324    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.704627    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.922441    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.998127    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.998193    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.204466    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.421869    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:51.497617    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:51.497766    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.704964    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.921681    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.024233    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.024424    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.205176    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.421667    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.497919    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.497980    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.704733    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.922155    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.998148    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.998382    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.205448    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.421924    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.497725    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.497775    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.704747    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.921897    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.998317    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.998342    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.204181    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.421218    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:54.498431    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.498477    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.703938    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.922680    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:54.997515    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.997601    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.204878    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.424417    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:55.497659    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.497705    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.704198    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.978599    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:55.997558    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.997629    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.214448    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.422187    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:56.497825    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.498018    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.704913    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.921475    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:56.997253    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.997272    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.203661    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.422284    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.498160    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.498228    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.704136    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.921421    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.997870    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.998011    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.204785    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.422302    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:58.497707    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:58.497797    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.704730    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.921940    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:58.998513    8556 kapi.go:107] duration metric: took 1m5.004309642s to wait for kubernetes.io/minikube-addons=registry ...
	I1029 08:21:58.998697    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.204752    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.422048    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.497708    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.704126    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.921462    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.997965    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.204846    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.242931    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:00.423687    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:00.500117    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.704157    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.923402    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.024077    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:22:01.148012    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:01.148043    8556 retry.go:31] will retry after 34.092481654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:01.204934    8556 kapi.go:107] duration metric: took 1m0.503711628s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1029 08:22:01.207092    8556 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-306574 cluster.
	I1029 08:22:01.208217    8556 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1029 08:22:01.209247    8556 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
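
The two messages above imply the gcp-auth webhook mutates pods at creation time (which is why existing pods must be recreated or refreshed), so the gcp-auth-skip-secret label has to be present when the pod is created. A minimal sketch of the opt-out; the pod name and image are placeholder assumptions, only the label key comes from the message above:

	# Hypothetical example: create a pod that opts out of credential mounting.
	kubectl run skip-demo --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600
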
	I1029 08:22:01.421985    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.497743    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:01.922213    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.998138    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.421491    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.497703    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.921899    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.997430    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.421827    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:03.497760    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.922305    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.000661    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:04.421732    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.497633    8556 kapi.go:107] duration metric: took 1m10.503421639s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1029 08:22:04.922252    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:05.421366    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:05.921709    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.422085    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.921434    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:07.421497    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:07.922304    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.422337    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.921639    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.422280    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.921466    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.421676    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.922421    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.421542    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.922366    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.422498    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.922094    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:13.421408    8556 kapi.go:107] duration metric: took 1m19.003249808s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1029 08:22:35.241334    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1029 08:22:35.782789    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 08:22:35.782881    8556 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
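
At this point every apply attempt has failed identically: the gadget namespace, RBAC, and daemonset objects all go through ("unchanged"/"configured"), but ig-crd.yaml never passes client-side validation, so enabling inspektor-gadget gives up after the final retry. The stderr itself names the escape hatch; a sketch of a manual retry on the node, reusing the same binary and paths shown in the log (a workaround only, the real fix is a manifest whose every document carries apiVersion and kind):

	# Reapply with validation disabled, as the error message suggests.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml
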
	I1029 08:22:35.784629    8556 out.go:179] * Enabled addons: storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, registry-creds, cloud-spanner, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1029 08:22:35.785925    8556 addons.go:515] duration metric: took 1m43.417207717s for enable addons: enabled=[storage-provisioner nvidia-device-plugin amd-gpu-device-plugin ingress-dns registry-creds cloud-spanner metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1029 08:22:35.785964    8556 start.go:247] waiting for cluster config update ...
	I1029 08:22:35.785984    8556 start.go:256] writing updated cluster config ...
	I1029 08:22:35.786258    8556 ssh_runner.go:195] Run: rm -f paused
	I1029 08:22:35.790143    8556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:22:35.793571    8556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9jrct" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.797938    8556 pod_ready.go:94] pod "coredns-66bc5c9577-9jrct" is "Ready"
	I1029 08:22:35.797962    8556 pod_ready.go:86] duration metric: took 4.37412ms for pod "coredns-66bc5c9577-9jrct" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.799936    8556 pod_ready.go:83] waiting for pod "etcd-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.803938    8556 pod_ready.go:94] pod "etcd-addons-306574" is "Ready"
	I1029 08:22:35.803962    8556 pod_ready.go:86] duration metric: took 4.002476ms for pod "etcd-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.805834    8556 pod_ready.go:83] waiting for pod "kube-apiserver-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.809513    8556 pod_ready.go:94] pod "kube-apiserver-addons-306574" is "Ready"
	I1029 08:22:35.809536    8556 pod_ready.go:86] duration metric: took 3.677568ms for pod "kube-apiserver-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.811431    8556 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.194693    8556 pod_ready.go:94] pod "kube-controller-manager-addons-306574" is "Ready"
	I1029 08:22:36.194731    8556 pod_ready.go:86] duration metric: took 383.260397ms for pod "kube-controller-manager-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.393780    8556 pod_ready.go:83] waiting for pod "kube-proxy-6gp9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.793912    8556 pod_ready.go:94] pod "kube-proxy-6gp9v" is "Ready"
	I1029 08:22:36.793941    8556 pod_ready.go:86] duration metric: took 400.1364ms for pod "kube-proxy-6gp9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.994449    8556 pod_ready.go:83] waiting for pod "kube-scheduler-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:37.394151    8556 pod_ready.go:94] pod "kube-scheduler-addons-306574" is "Ready"
	I1029 08:22:37.394177    8556 pod_ready.go:86] duration metric: took 399.695054ms for pod "kube-scheduler-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:37.394188    8556 pod_ready.go:40] duration metric: took 1.60402213s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:22:37.438940    8556 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 08:22:37.441759    8556 out.go:179] * Done! kubectl is now configured to use "addons-306574" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.090729931Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-4cfsd/POD" id=e900e7fd-2ac5-4068-a5c9-891f6394d61c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.09083029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.098828428Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-4cfsd Namespace:default ID:c56791f34c2e7b597c1ab520941a1678bd78ecd9ac576a72f612f19a5ac2d4c8 UID:566b45c9-77c9-4eed-9bc4-76d901c902a0 NetNS:/var/run/netns/8665462c-e5a5-4d29-a77a-4e4b0fef418b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f84790}] Aliases:map[]}"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.098863753Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-4cfsd to CNI network \"kindnet\" (type=ptp)"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.109867442Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-4cfsd Namespace:default ID:c56791f34c2e7b597c1ab520941a1678bd78ecd9ac576a72f612f19a5ac2d4c8 UID:566b45c9-77c9-4eed-9bc4-76d901c902a0 NetNS:/var/run/netns/8665462c-e5a5-4d29-a77a-4e4b0fef418b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000f84790}] Aliases:map[]}"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.110054636Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-4cfsd for CNI network kindnet (type=ptp)"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.111042341Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.11217303Z" level=info msg="Ran pod sandbox c56791f34c2e7b597c1ab520941a1678bd78ecd9ac576a72f612f19a5ac2d4c8 with infra container: default/hello-world-app-5d498dc89-4cfsd/POD" id=e900e7fd-2ac5-4068-a5c9-891f6394d61c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.113468089Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=81d353c9-2808-4722-aab0-814a63f320b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.113611869Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=81d353c9-2808-4722-aab0-814a63f320b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.113663494Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=81d353c9-2808-4722-aab0-814a63f320b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.114266836Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=9087f131-4a87-4628-8306-f3141fba4874 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.119586924Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.888903043Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=9087f131-4a87-4628-8306-f3141fba4874 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.88944382Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c7946816-251f-4920-a820-6a21650682d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.890792369Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c8709c33-411d-447d-a36f-587407a747f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.894547863Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-4cfsd/hello-world-app" id=e5bd4287-dbb3-42ea-af3b-2d557bcee1be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.894693205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.900480217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.90064Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/666dbcb5b8e5efe387083b7d95ce626a5fd0a1bc2a6b18e8c096cda3c65944a6/merged/etc/passwd: no such file or directory"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.90066448Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/666dbcb5b8e5efe387083b7d95ce626a5fd0a1bc2a6b18e8c096cda3c65944a6/merged/etc/group: no such file or directory"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.900872961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.936731377Z" level=info msg="Created container 387082969a96b25763f233059d6f17c689e6285ae20e09c9926a1678afd9d0f4: default/hello-world-app-5d498dc89-4cfsd/hello-world-app" id=e5bd4287-dbb3-42ea-af3b-2d557bcee1be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.937431932Z" level=info msg="Starting container: 387082969a96b25763f233059d6f17c689e6285ae20e09c9926a1678afd9d0f4" id=33d6320a-515f-4033-a4d4-cdfab242c2af name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 08:25:17 addons-306574 crio[777]: time="2025-10-29T08:25:17.939485002Z" level=info msg="Started container" PID=9987 containerID=387082969a96b25763f233059d6f17c689e6285ae20e09c9926a1678afd9d0f4 description=default/hello-world-app-5d498dc89-4cfsd/hello-world-app id=33d6320a-515f-4033-a4d4-cdfab242c2af name=/runtime.v1.RuntimeService/StartContainer sandboxID=c56791f34c2e7b597c1ab520941a1678bd78ecd9ac576a72f612f19a5ac2d4c8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	387082969a96b       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   c56791f34c2e7       hello-world-app-5d498dc89-4cfsd             default
	e6870790b81f8       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   4d7c6e821a089       registry-creds-764b6fb674-s8s7q             kube-system
	53e1fd3de17d2       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   cf32ba1571da3       nginx                                       default
	80233b40a1070       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   952c556445d28       busybox                                     default
	d99432f91672d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	9b8691b1023f8       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	e573e53bb23e0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	e8d80d0af78a6       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	2db472537b2f6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   1495b3f498d32       gadget-k5cgq                                gadget
	2e36d72e127f6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	5e8c5db172939       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   c65f2d2f92b6f       ingress-nginx-controller-675c5ddd98-slzzq   ingress-nginx
	baf6f3b37987b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   e037f9417ff82       gcp-auth-78565c9fb4-psjtf                   gcp-auth
	0c6b816415341       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   e9d65f493d6ea       registry-proxy-b9mf9                        kube-system
	65705ac3c758b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   6e264716c8898       nvidia-device-plugin-daemonset-fm5xc        kube-system
	1fe5195ca5ae1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	5371c6256fe4e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   c0c74f57ce43c       amd-gpu-device-plugin-f4ngl                 kube-system
	9b0733b1c46f1       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   6198fa49cd369       metrics-server-85b7d694d7-nsm7j             kube-system
	197632c3e4940       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   b65dcde210b42       csi-hostpath-resizer-0                      kube-system
	59926386ecab3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   386f1d0a8770b       csi-hostpath-attacher-0                     kube-system
	570224acd5072       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   6ae14b02c8958       snapshot-controller-7d9fbc56b8-v4lqk        kube-system
	6d99cf55a4a67       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   961cc63c584a0       local-path-provisioner-648f6765c9-whpv4     local-path-storage
	217f45f262a57       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   9b4c386fc4a26       snapshot-controller-7d9fbc56b8-2phmj        kube-system
	f20d80bdb5eda       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   abf9ae23a16bc       ingress-nginx-admission-patch-fgbht         ingress-nginx
	f05088385cca1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   6875e255270d5       ingress-nginx-admission-create-5tdvz        ingress-nginx
	1e97c2959256c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   1ec8189deaa7b       yakd-dashboard-5ff678cb9-njrr5              yakd-dashboard
	6a10f82f1439a       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   5379380c31f12       registry-6b586f9694-782gg                   kube-system
	ff1e52067a5c8       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   76e8dd4438712       kube-ingress-dns-minikube                   kube-system
	9514f177e9812       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   6b4e7daf9b454       cloud-spanner-emulator-86bd5cbb97-wrt96     default
	ea0d3827c6799       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   23a72ec104987       coredns-66bc5c9577-9jrct                    kube-system
	11ad0d3d51574       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   1b02725b41bea       storage-provisioner                         kube-system
	2df32f1d553fc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   1c528c6626acf       kube-proxy-6gp9v                            kube-system
	a41a72a7acd69       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   132dcf3f412f8       kindnet-nsf4w                               kube-system
	90b2a91e7069c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   d560b0f9607d6       kube-controller-manager-addons-306574       kube-system
	56022b3e8de6c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   65edbdb46e570       kube-scheduler-addons-306574                kube-system
	a2eacbffa27c9       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   5b80e2af7adc2       kube-apiserver-addons-306574                kube-system
	49643bd1cddf5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   96c2ab8ce679a       etcd-addons-306574                          kube-system
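	
	A listing like the one above can be reproduced directly against the CRI runtime on the node; a minimal sketch, assuming the default CRI-O socket path:
	
		# list all containers known to CRI-O, including exited ones
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a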
	
	
	==> coredns [ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db] <==
	[INFO] 10.244.0.21:43116 - 54475 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006925587s
	[INFO] 10.244.0.21:54978 - 5250 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005304996s
	[INFO] 10.244.0.21:49781 - 32271 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006143363s
	[INFO] 10.244.0.21:54756 - 59227 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004428992s
	[INFO] 10.244.0.21:45790 - 60969 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00513392s
	[INFO] 10.244.0.21:34083 - 40780 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001159231s
	[INFO] 10.244.0.21:56546 - 37991 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002058078s
	[INFO] 10.244.0.25:55724 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000247285s
	[INFO] 10.244.0.25:34792 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189684s
	[INFO] 10.244.0.31:52356 - 9815 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000230594s
	[INFO] 10.244.0.31:43316 - 10059 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000291113s
	[INFO] 10.244.0.31:59106 - 55103 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000109683s
	[INFO] 10.244.0.31:36819 - 23742 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000105455s
	[INFO] 10.244.0.31:34345 - 50621 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000103055s
	[INFO] 10.244.0.31:33567 - 63504 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000107049s
	[INFO] 10.244.0.31:35252 - 47870 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003181281s
	[INFO] 10.244.0.31:50708 - 39314 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003402496s
	[INFO] 10.244.0.31:50968 - 54233 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004153683s
	[INFO] 10.244.0.31:38373 - 51364 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004485495s
	[INFO] 10.244.0.31:57723 - 1552 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004802507s
	[INFO] 10.244.0.31:33816 - 7728 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005031755s
	[INFO] 10.244.0.31:42212 - 3955 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00489644s
	[INFO] 10.244.0.31:38050 - 60420 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006860883s
	[INFO] 10.244.0.31:60925 - 41364 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001745465s
	[INFO] 10.244.0.31:33833 - 11394 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001935145s
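	
	The NXDOMAIN chains above are the expected effect of the cluster DNS search path: with the default ndots:5 in pod resolv.conf, an external name such as storage.googleapis.com is tried against each search domain (the cluster.local suffixes, the GCE internal domains, and so on) before being resolved as-is. A minimal sketch for observing this from inside the cluster, with an illustrative pod name and image:
	
		# inspect the resolver configuration a pod actually receives
		kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never -- cat /etc/resolv.conf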
	
	
	==> describe nodes <==
	Name:               addons-306574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-306574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=addons-306574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_20_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-306574
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-306574"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:20:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-306574
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:25:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:25:11 +0000   Wed, 29 Oct 2025 08:20:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:25:11 +0000   Wed, 29 Oct 2025 08:20:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:25:11 +0000   Wed, 29 Oct 2025 08:20:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:25:11 +0000   Wed, 29 Oct 2025 08:21:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-306574
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a1650f9e-db84-4c08-b5e9-8c3a81f4f882
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  default                     cloud-spanner-emulator-86bd5cbb97-wrt96      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  default                     hello-world-app-5d498dc89-4cfsd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-k5cgq                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  gcp-auth                    gcp-auth-78565c9fb4-psjtf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-slzzq    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m24s
	  kube-system                 amd-gpu-device-plugin-f4ngl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 coredns-66bc5c9577-9jrct                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m26s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 csi-hostpathplugin-jqbm2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-addons-306574                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m32s
	  kube-system                 kindnet-nsf4w                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m27s
	  kube-system                 kube-apiserver-addons-306574                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-addons-306574        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-6gp9v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-addons-306574                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 metrics-server-85b7d694d7-nsm7j              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m25s
	  kube-system                 nvidia-device-plugin-daemonset-fm5xc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 registry-6b586f9694-782gg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 registry-creds-764b6fb674-s8s7q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 registry-proxy-b9mf9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 snapshot-controller-7d9fbc56b8-2phmj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 snapshot-controller-7d9fbc56b8-v4lqk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  local-path-storage          local-path-provisioner-648f6765c9-whpv4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-njrr5               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m24s  kube-proxy       
	  Normal  Starting                 4m32s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m32s  kubelet          Node addons-306574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s  kubelet          Node addons-306574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s  kubelet          Node addons-306574 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m27s  node-controller  Node addons-306574 event: Registered Node addons-306574 in Controller
	  Normal  NodeReady                3m45s  kubelet          Node addons-306574 status is now: NodeReady
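	
	The node summary above is standard kubectl output; assuming the kubeconfig context points at this cluster, it can be regenerated with:
	
		kubectl describe node addons-306574
		# or just the capacity/version columns:
		kubectl get node addons-306574 -o wide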
	
	
	==> dmesg <==
	[  +0.101648] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029373] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.989088] kauditd_printk_skb: 47 callbacks suppressed
	[Oct29 08:23] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.056844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000035] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023834] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +4.031591] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +8.063160] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[ +16.382216] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 08:24] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
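	
	The repeated "martian source" lines mean the kernel saw packets claiming the loopback source 127.0.0.1 arrive on eth0 and logged them because martian logging is enabled; they are noise from hairpinned traffic rather than a failure by themselves. A sketch for checking the relevant sysctl on the node, mirroring the profile flag used elsewhere in this report:
	
		out/minikube-linux-amd64 -p addons-306574 ssh -- sysctl net.ipv4.conf.all.log_martians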
	
	
	==> etcd [49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd] <==
	{"level":"warn","ts":"2025-10-29T08:20:43.761466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.768051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.774023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.780084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.786340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.793035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.799593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.812183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.818882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.825720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.838520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.846281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.855030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.896837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:54.899505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:54.906316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.309877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.316913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.329726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.336174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38068","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:21:41.147494Z","caller":"traceutil/trace.go:172","msg":"trace[1669655330] linearizableReadLoop","detail":"{readStateIndex:987; appliedIndex:987; }","duration":"140.839596ms","start":"2025-10-29T08:21:41.006634Z","end":"2025-10-29T08:21:41.147474Z","steps":["trace[1669655330] 'read index received'  (duration: 140.832801ms)","trace[1669655330] 'applied index is now lower than readState.Index'  (duration: 6.009µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-29T08:21:41.272739Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.024603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:21:41.272805Z","caller":"traceutil/trace.go:172","msg":"trace[1749734003] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:964; }","duration":"266.163223ms","start":"2025-10-29T08:21:41.006626Z","end":"2025-10-29T08:21:41.272789Z","steps":["trace[1749734003] 'agreement among raft nodes before linearized reading'  (duration: 140.955453ms)","trace[1749734003] 'range keys from in-memory index tree'  (duration: 125.035233ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-29T08:21:41.272802Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.196327ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040954387074543 > lease_revoke:<id:70cc9a2f0e118c48>","response":"size:29"}
	{"level":"info","ts":"2025-10-29T08:22:02.121887Z","caller":"traceutil/trace.go:172","msg":"trace[411747514] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"118.9753ms","start":"2025-10-29T08:22:02.002890Z","end":"2025-10-29T08:22:02.121866Z","steps":["trace[411747514] 'process raft request'  (duration: 107.556469ms)","trace[411747514] 'compare'  (duration: 11.333407ms)"],"step_count":2}
	
	
	==> gcp-auth [baf6f3b37987bfaa110b830d8c52aba3e7d09991703da7372971a14f5c58efef] <==
	2025/10/29 08:22:00 GCP Auth Webhook started!
	2025/10/29 08:22:37 Ready to marshal response ...
	2025/10/29 08:22:37 Ready to write response ...
	2025/10/29 08:22:37 Ready to marshal response ...
	2025/10/29 08:22:37 Ready to write response ...
	2025/10/29 08:22:38 Ready to marshal response ...
	2025/10/29 08:22:38 Ready to write response ...
	2025/10/29 08:22:53 Ready to marshal response ...
	2025/10/29 08:22:53 Ready to write response ...
	2025/10/29 08:22:55 Ready to marshal response ...
	2025/10/29 08:22:55 Ready to write response ...
	2025/10/29 08:22:59 Ready to marshal response ...
	2025/10/29 08:22:59 Ready to write response ...
	2025/10/29 08:22:59 Ready to marshal response ...
	2025/10/29 08:22:59 Ready to write response ...
	2025/10/29 08:22:59 Ready to marshal response ...
	2025/10/29 08:22:59 Ready to write response ...
	2025/10/29 08:23:08 Ready to marshal response ...
	2025/10/29 08:23:08 Ready to write response ...
	2025/10/29 08:23:31 Ready to marshal response ...
	2025/10/29 08:23:31 Ready to write response ...
	2025/10/29 08:25:16 Ready to marshal response ...
	2025/10/29 08:25:16 Ready to write response ...
	
	
	==> kernel <==
	 08:25:18 up 7 min,  0 user,  load average: 0.57, 0.79, 0.43
	Linux addons-306574 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110] <==
	I1029 08:23:13.227362       1 main.go:301] handling current node
	I1029 08:23:23.229842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:23:23.229877       1 main.go:301] handling current node
	I1029 08:23:33.227807       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:23:33.227851       1 main.go:301] handling current node
	I1029 08:23:43.227818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:23:43.227850       1 main.go:301] handling current node
	I1029 08:23:53.226764       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:23:53.226796       1 main.go:301] handling current node
	I1029 08:24:03.227195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:03.227238       1 main.go:301] handling current node
	I1029 08:24:13.226758       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:13.226789       1 main.go:301] handling current node
	I1029 08:24:23.228376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:23.228432       1 main.go:301] handling current node
	I1029 08:24:33.227643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:33.227697       1 main.go:301] handling current node
	I1029 08:24:43.227726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:43.227755       1 main.go:301] handling current node
	I1029 08:24:53.227080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:24:53.227109       1 main.go:301] handling current node
	I1029 08:25:03.227663       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:03.227701       1 main.go:301] handling current node
	I1029 08:25:13.227135       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:25:13.227174       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1] <==
	W1029 08:21:21.329651       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:21.336106       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:33.511678       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.511720       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:33.511726       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.511753       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:33.532970       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.533218       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:33.536109       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.536144       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:52.974618       1 handler_proxy.go:99] no RequestInfo found in the context
	E1029 08:21:52.974693       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1029 08:21:52.974696       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.42.12:443: connect: connection refused" logger="UnhandledError"
	E1029 08:21:52.976315       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.42.12:443: connect: connection refused" logger="UnhandledError"
	E1029 08:21:52.982055       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.42.12:443: connect: connection refused" logger="UnhandledError"
	I1029 08:21:53.034657       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1029 08:22:45.131047       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39020: use of closed network connection
	E1029 08:22:45.281097       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39044: use of closed network connection
	I1029 08:22:53.657059       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1029 08:22:53.863476       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.54.42"}
	I1029 08:23:09.174428       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1029 08:25:16.855142       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.83.174"}
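	
	The "failing open" entries show the gcp-auth mutating webhook being skipped while its backend was still coming up, which its failure policy permits; the later metrics.k8s.io 503s likewise reflect metrics-server not yet serving. A sketch for inspecting both registrations, assuming cluster access:
	
		kubectl get mutatingwebhookconfigurations
		kubectl -n gcp-auth get endpoints gcp-auth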
	
	
	==> kube-controller-manager [90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67] <==
	I1029 08:20:51.290215       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 08:20:51.290215       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 08:20:51.290316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 08:20:51.290464       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:20:51.290560       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 08:20:51.290576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:20:51.290798       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 08:20:51.291151       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 08:20:51.291264       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 08:20:51.293174       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 08:20:51.295349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:20:51.299527       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:20:51.302766       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 08:20:51.309051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1029 08:20:53.702046       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1029 08:21:21.304142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:21:21.304265       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1029 08:21:21.304309       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1029 08:21:21.317841       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1029 08:21:21.323946       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1029 08:21:21.405349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:21:21.424548       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 08:21:36.248091       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1029 08:21:51.411158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:21:51.433364       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
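	
	The "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors come from the resource-quota and garbage-collector controllers rediscovering APIs while the metrics-server APIService is still registering; they stop once it reports Available. A sketch for checking that state:
	
		kubectl get apiservice v1beta1.metrics.k8s.io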
	
	
	==> kube-proxy [2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326] <==
	I1029 08:20:52.753094       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:20:52.989968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:20:53.090232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:20:53.093088       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:20:53.093978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:20:53.295808       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:20:53.308793       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:20:53.469652       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:20:53.485405       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:20:53.485542       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:20:53.488047       1 config.go:200] "Starting service config controller"
	I1029 08:20:53.488126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:20:53.488184       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:20:53.488210       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:20:53.488314       1 config.go:309] "Starting node config controller"
	I1029 08:20:53.488348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:20:53.488373       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:20:53.488688       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:20:53.488758       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:20:53.589406       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 08:20:53.589472       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:20:53.593100       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
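	
	kube-proxy here runs in iptables mode, and the warning about nodePortAddresses being unset is advisory only. In a kubeadm-provisioned cluster such as this one, the effective proxy configuration can be read from its ConfigMap; a sketch:
	
		kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}'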
	
	
	==> kube-scheduler [56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3] <==
	E1029 08:20:44.314716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:20:44.314773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:20:44.314829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:20:44.314883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:20:44.315214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:20:44.315364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:20:44.315428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:20:44.315544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:20:44.315615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:20:44.315686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:20:44.315709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:20:44.315738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:20:44.315615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:20:44.315829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:20:44.315844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:20:45.174042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:20:45.217845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:20:45.240104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:20:45.282210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:20:45.282215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:20:45.394980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:20:45.482212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 08:20:45.486264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:20:45.511641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1029 08:20:45.911223       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
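	
	The burst of "Failed to watch ... is forbidden" errors is the usual startup race: the scheduler begins listing resources before the bootstrap RBAC bindings exist, and the errors cease once they do (hence the final caches-synced line). A sketch for confirming the binding that resolves them:
	
		kubectl get clusterrolebinding system:kube-scheduler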
	
	
	==> kubelet <==
	Oct 29 08:23:36 addons-306574 kubelet[1303]: E1029 08:23:36.553434    1303 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-s8s7q" podUID="1041cb13-458e-46e5-8f69-a740c85ba5df"
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.082431    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/63f59069-a6ee-4c87-82e5-74bf410bfe36-gcp-creds\") pod \"63f59069-a6ee-4c87-82e5-74bf410bfe36\" (UID: \"63f59069-a6ee-4c87-82e5-74bf410bfe36\") "
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.082583    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63f59069-a6ee-4c87-82e5-74bf410bfe36-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "63f59069-a6ee-4c87-82e5-74bf410bfe36" (UID: "63f59069-a6ee-4c87-82e5-74bf410bfe36"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.082631    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8dd38c85-b4a0-11f0-9c99-66d782f3a193\") pod \"63f59069-a6ee-4c87-82e5-74bf410bfe36\" (UID: \"63f59069-a6ee-4c87-82e5-74bf410bfe36\") "
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.082702    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qljm9\" (UniqueName: \"kubernetes.io/projected/63f59069-a6ee-4c87-82e5-74bf410bfe36-kube-api-access-qljm9\") pod \"63f59069-a6ee-4c87-82e5-74bf410bfe36\" (UID: \"63f59069-a6ee-4c87-82e5-74bf410bfe36\") "
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.083016    1303 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/63f59069-a6ee-4c87-82e5-74bf410bfe36-gcp-creds\") on node \"addons-306574\" DevicePath \"\""
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.085111    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f59069-a6ee-4c87-82e5-74bf410bfe36-kube-api-access-qljm9" (OuterVolumeSpecName: "kube-api-access-qljm9") pod "63f59069-a6ee-4c87-82e5-74bf410bfe36" (UID: "63f59069-a6ee-4c87-82e5-74bf410bfe36"). InnerVolumeSpecName "kube-api-access-qljm9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.086540    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^8dd38c85-b4a0-11f0-9c99-66d782f3a193" (OuterVolumeSpecName: "task-pv-storage") pod "63f59069-a6ee-4c87-82e5-74bf410bfe36" (UID: "63f59069-a6ee-4c87-82e5-74bf410bfe36"). InnerVolumeSpecName "pvc-3e5a67f0-c28f-4ff2-a0ba-2d70f433a64e". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.183965    1303 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qljm9\" (UniqueName: \"kubernetes.io/projected/63f59069-a6ee-4c87-82e5-74bf410bfe36-kube-api-access-qljm9\") on node \"addons-306574\" DevicePath \"\""
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.184040    1303 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-3e5a67f0-c28f-4ff2-a0ba-2d70f433a64e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8dd38c85-b4a0-11f0-9c99-66d782f3a193\") on node \"addons-306574\" "
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.188641    1303 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-3e5a67f0-c28f-4ff2-a0ba-2d70f433a64e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^8dd38c85-b4a0-11f0-9c99-66d782f3a193") on node "addons-306574"
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.284856    1303 reconciler_common.go:299] "Volume detached for volume \"pvc-3e5a67f0-c28f-4ff2-a0ba-2d70f433a64e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^8dd38c85-b4a0-11f0-9c99-66d782f3a193\") on node \"addons-306574\" DevicePath \"\""
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.410753    1303 scope.go:117] "RemoveContainer" containerID="d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d"
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.421304    1303 scope.go:117] "RemoveContainer" containerID="d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d"
	Oct 29 08:23:39 addons-306574 kubelet[1303]: E1029 08:23:39.421780    1303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d\": container with ID starting with d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d not found: ID does not exist" containerID="d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d"
	Oct 29 08:23:39 addons-306574 kubelet[1303]: I1029 08:23:39.421829    1303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d"} err="failed to get container status \"d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d\": rpc error: code = NotFound desc = could not find container \"d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d\": container with ID starting with d1bdaae88d8511aa7172cf71c90cca7e29146df77058a19e2bec610c477bd31d not found: ID does not exist"
	Oct 29 08:23:40 addons-306574 kubelet[1303]: I1029 08:23:40.707519    1303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f59069-a6ee-4c87-82e5-74bf410bfe36" path="/var/lib/kubelet/pods/63f59069-a6ee-4c87-82e5-74bf410bfe36/volumes"
	Oct 29 08:23:46 addons-306574 kubelet[1303]: I1029 08:23:46.724870    1303 scope.go:117] "RemoveContainer" containerID="c7c00e54e3b607f940cafdd8f1b47d6df50e04249ab3f937818c8bfe7c6f4597"
	Oct 29 08:23:46 addons-306574 kubelet[1303]: I1029 08:23:46.733412    1303 scope.go:117] "RemoveContainer" containerID="0ef3851634d79963d94ec990c9a67823419d44733077ea8c2fb59ae0d6bed795"
	Oct 29 08:23:50 addons-306574 kubelet[1303]: I1029 08:23:50.473452    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-s8s7q" podStartSLOduration=176.423243462 podStartE2EDuration="2m57.473431258s" podCreationTimestamp="2025-10-29 08:20:53 +0000 UTC" firstStartedPulling="2025-10-29 08:23:48.728364909 +0000 UTC m=+182.111982789" lastFinishedPulling="2025-10-29 08:23:49.778552706 +0000 UTC m=+183.162170585" observedRunningTime="2025-10-29 08:23:50.473265075 +0000 UTC m=+183.856882974" watchObservedRunningTime="2025-10-29 08:23:50.473431258 +0000 UTC m=+183.857049158"
	Oct 29 08:24:25 addons-306574 kubelet[1303]: I1029 08:24:25.704721    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-f4ngl" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:24:30 addons-306574 kubelet[1303]: I1029 08:24:30.705821    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-b9mf9" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:24:42 addons-306574 kubelet[1303]: I1029 08:24:42.704675    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fm5xc" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:25:16 addons-306574 kubelet[1303]: I1029 08:25:16.854814    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/566b45c9-77c9-4eed-9bc4-76d901c902a0-gcp-creds\") pod \"hello-world-app-5d498dc89-4cfsd\" (UID: \"566b45c9-77c9-4eed-9bc4-76d901c902a0\") " pod="default/hello-world-app-5d498dc89-4cfsd"
	Oct 29 08:25:16 addons-306574 kubelet[1303]: I1029 08:25:16.854876    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjd8s\" (UniqueName: \"kubernetes.io/projected/566b45c9-77c9-4eed-9bc4-76d901c902a0-kube-api-access-bjd8s\") pod \"hello-world-app-5d498dc89-4cfsd\" (UID: \"566b45c9-77c9-4eed-9bc4-76d901c902a0\") " pod="default/hello-world-app-5d498dc89-4cfsd"
	
	
	==> storage-provisioner [11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3] <==
	W1029 08:24:53.018583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:24:55.021637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:24:55.025123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:24:57.027926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:24:57.032615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:24:59.035468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:24:59.039076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:01.041962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:01.045739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:03.048487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:03.051759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:05.054296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:05.057698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:07.060333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:07.064043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:09.066628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:09.071831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:11.074920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:11.078504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:13.081814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:13.085674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:15.088914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:15.093800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:17.096714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:25:17.100258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
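Note on the storage-provisioner warnings above: client-go is flagging every use of v1 Endpoints, which is deprecated since Kubernetes v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. Warnings at this steady two-second cadence typically come from Endpoints-based leader election; the usual fix is a coordination.k8s.io Lease lock. A minimal sketch with client-go's leaderelection package, where the lock name, namespace, and timings are illustrative assumptions rather than the provisioner's actual configuration:

	// Sketch: Lease-based leader election instead of deprecated v1 Endpoints.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{ // a Lease emits no deprecation warnings
			LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}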
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-306574 -n addons-306574
helpers_test.go:269: (dbg) Run:  kubectl --context addons-306574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-306574 describe pod ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-306574 describe pod ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht: exit status 1 (54.870299ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5tdvz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fgbht" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-306574 describe pod ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (238.925096ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:25:19.384443   23828 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:25:19.384731   23828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:25:19.384741   23828 out.go:374] Setting ErrFile to fd 2...
	I1029 08:25:19.384745   23828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:25:19.384950   23828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:25:19.385211   23828 mustload.go:66] Loading cluster: addons-306574
	I1029 08:25:19.385513   23828 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:25:19.385525   23828 addons.go:607] checking whether the cluster is paused
	I1029 08:25:19.385601   23828 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:25:19.385615   23828 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:25:19.385977   23828 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:25:19.403101   23828 ssh_runner.go:195] Run: systemctl --version
	I1029 08:25:19.403143   23828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:25:19.419953   23828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:25:19.517501   23828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:25:19.517624   23828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:25:19.546077   23828 cri.go:89] found id: "e6870790b81f8a3af3db6f11e2025e9802d96e6ee6c990983bb3f8bf7cabb1b2"
	I1029 08:25:19.546098   23828 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:25:19.546102   23828 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:25:19.546105   23828 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:25:19.546107   23828 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:25:19.546110   23828 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:25:19.546113   23828 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:25:19.546115   23828 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:25:19.546117   23828 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:25:19.546122   23828 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:25:19.546125   23828 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:25:19.546127   23828 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:25:19.546130   23828 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:25:19.546150   23828 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:25:19.546158   23828 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:25:19.546164   23828 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:25:19.546171   23828 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:25:19.546176   23828 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:25:19.546180   23828 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:25:19.546184   23828 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:25:19.546188   23828 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:25:19.546192   23828 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:25:19.546196   23828 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:25:19.546201   23828 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:25:19.546205   23828 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:25:19.546208   23828 cri.go:89] found id: ""
	I1029 08:25:19.546245   23828 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:25:19.560101   23828 out.go:203] 
	W1029 08:25:19.561315   23828 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:25:19.561343   23828 out.go:285] * 
	* 
	W1029 08:25:19.564351   23828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:25:19.565431   23828 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable ingress --alsologtostderr -v=1: exit status 11 (242.129504ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:25:19.623921   23889 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:25:19.624194   23889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:25:19.624205   23889 out.go:374] Setting ErrFile to fd 2...
	I1029 08:25:19.624209   23889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:25:19.624391   23889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:25:19.624677   23889 mustload.go:66] Loading cluster: addons-306574
	I1029 08:25:19.625010   23889 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:25:19.625029   23889 addons.go:607] checking whether the cluster is paused
	I1029 08:25:19.625110   23889 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:25:19.625125   23889 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:25:19.625483   23889 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:25:19.642708   23889 ssh_runner.go:195] Run: systemctl --version
	I1029 08:25:19.642763   23889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:25:19.660088   23889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:25:19.758506   23889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:25:19.758609   23889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:25:19.788402   23889 cri.go:89] found id: "e6870790b81f8a3af3db6f11e2025e9802d96e6ee6c990983bb3f8bf7cabb1b2"
	I1029 08:25:19.788429   23889 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:25:19.788436   23889 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:25:19.788441   23889 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:25:19.788445   23889 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:25:19.788452   23889 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:25:19.788457   23889 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:25:19.788461   23889 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:25:19.788465   23889 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:25:19.788479   23889 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:25:19.788482   23889 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:25:19.788485   23889 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:25:19.788488   23889 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:25:19.788490   23889 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:25:19.788493   23889 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:25:19.788505   23889 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:25:19.788510   23889 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:25:19.788514   23889 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:25:19.788517   23889 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:25:19.788519   23889 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:25:19.788522   23889 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:25:19.788524   23889 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:25:19.788527   23889 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:25:19.788529   23889 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:25:19.788531   23889 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:25:19.788539   23889 cri.go:89] found id: ""
	I1029 08:25:19.788579   23889 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:25:19.802532   23889 out.go:203] 
	W1029 08:25:19.803634   23889 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:25:19.803662   23889 out.go:285] * 
	* 
	W1029 08:25:19.807312   23889 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:25:19.808400   23889 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.40s)
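This failure, like every MK_ADDON_DISABLE_PAUSED exit in this run, is not an ingress problem: before disabling an addon, minikube checks whether the cluster is paused, and on this crio node the check's `sudo runc list -f json` exits 1 because /run/runc was never created. The addon workloads themselves were healthy; only the paused-state probe failed, and the same exit-status-11 signature repeats verbatim in the InspektorGadget, MetricsServer, and CSI sections below. A minimal sketch of a probe that treats the missing state directory as "nothing is paused" instead of an error; runCmd is a hypothetical stand-in for minikube's command runner, not its actual code:

	// Sketch: tolerate a missing runc state dir when probing for paused containers.
	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// runCmd returns (stdout, stderr, err) for a shell command run on the node.
	func listPaused(runCmd func(string) (string, string, error)) ([]string, error) {
		stdout, stderr, err := runCmd("sudo runc list -f json")
		if err != nil {
			// "open /run/runc: no such file or directory" means runc has no
			// state at all, so nothing can be paused; don't fail the operation.
			if strings.Contains(stderr, "no such file or directory") {
				return nil, nil
			}
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var states []runcState
		if err := json.Unmarshal([]byte(stdout), &states); err != nil {
			return nil, fmt.Errorf("parse runc list output: %w", err)
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		// Simulate the failure mode captured in this log.
		fake := func(string) (string, string, error) {
			return "", `level=error msg="open /run/runc: no such file or directory"`, fmt.Errorf("exit status 1")
		}
		paused, err := listPaused(fake)
		fmt.Println(paused, err) // [] <nil>: a missing state dir is not an error
	}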

x
+
TestAddons/parallel/InspektorGadget (5.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget


=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-k5cgq" [9a353ac0-a61b-479e-b49c-1d26a37e2f91] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00402352s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (298.702747ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:23:01.207117   20089 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:01.207503   20089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:01.207521   20089 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:01.207527   20089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:01.207862   20089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:23:01.208265   20089 mustload.go:66] Loading cluster: addons-306574
	I1029 08:23:01.208757   20089 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:01.208778   20089 addons.go:607] checking whether the cluster is paused
	I1029 08:23:01.208917   20089 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:01.208943   20089 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:23:01.209503   20089 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:23:01.235941   20089 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:01.236027   20089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:23:01.258157   20089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:23:01.365688   20089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:01.365797   20089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:01.400682   20089 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:23:01.400711   20089 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:23:01.400717   20089 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:23:01.400722   20089 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:23:01.400726   20089 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:23:01.400730   20089 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:23:01.400734   20089 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:23:01.400738   20089 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:23:01.400742   20089 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:23:01.400748   20089 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:23:01.400753   20089 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:23:01.400757   20089 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:23:01.400760   20089 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:23:01.400764   20089 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:23:01.400768   20089 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:23:01.400773   20089 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:23:01.400778   20089 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:23:01.400784   20089 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:23:01.400788   20089 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:23:01.400792   20089 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:23:01.400796   20089 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:23:01.400800   20089 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:23:01.400804   20089 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:23:01.400808   20089 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:23:01.400811   20089 cri.go:89] found id: ""
	I1029 08:23:01.400855   20089 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:01.421286   20089 out.go:203] 
	W1029 08:23:01.422740   20089 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:01.422760   20089 out.go:285] * 
	* 
	W1029 08:23:01.427752   20089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:01.429412   20089 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.30s)
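The gadget pod itself passed its health wait; only the disable step failed, on the same runc probe described under the Ingress failure above. The "healthy within 5.00s" line is a label-selector poll: list pods matching k8s-app=gadget until all are Running and Ready. Roughly, with client-go (waitForLabel is an illustrative helper, not the harness's actual code):

	// Sketch: poll until every pod matching a label selector is Running and Ready.
	package gadgetwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// e.g. waitForLabel(ctx, client, "gadget", "k8s-app=gadget", 8*time.Minute)
	func waitForLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // not there yet; transient errors just retry
				}
				for i := range pods.Items {
					p := &pods.Items[i]
					if p.Status.Phase != corev1.PodRunning || !podReady(p) {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func podReady(p *corev1.Pod) bool {
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}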

x
+
TestAddons/parallel/MetricsServer (5.33s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.722464ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I1029 08:22:50.803799    7218 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Running
I1029 08:22:50.806954    7218 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1029 08:22:50.806977    7218 kapi.go:107] duration metric: took 3.199142ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003378713s
addons_test.go:463: (dbg) Run:  kubectl --context addons-306574 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (264.547659ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:55.926618   19205 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:55.926912   19205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:55.926922   19205 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:55.926927   19205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:55.927157   19205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:55.927424   19205 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:55.927819   19205 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:55.927851   19205 addons.go:607] checking whether the cluster is paused
	I1029 08:22:55.927970   19205 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:55.928008   19205 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:55.928467   19205 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:55.951744   19205 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:55.951886   19205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:55.974455   19205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:56.075810   19205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:56.075879   19205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:56.105443   19205 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:56.105464   19205 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:56.105468   19205 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:56.105471   19205 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:56.105474   19205 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:56.105477   19205 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:56.105479   19205 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:56.105482   19205 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:56.105484   19205 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:56.105490   19205 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:56.105493   19205 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:56.105495   19205 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:56.105497   19205 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:56.105500   19205 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:56.105502   19205 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:56.105509   19205 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:56.105512   19205 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:56.105516   19205 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:56.105518   19205 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:56.105521   19205 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:56.105523   19205 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:56.105526   19205 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:56.105534   19205 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:56.105537   19205 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:56.105539   19205 cri.go:89] found id: ""
	I1029 08:22:56.105576   19205 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:56.120900   19205 out.go:203] 
	W1029 08:22:56.122205   19205 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:56.122239   19205 out.go:285] * 
	* 
	W1029 08:22:56.125287   19205 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:56.126514   19205 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.33s)
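Here too the functional check passed: `kubectl top pods` (addons_test.go:463) succeeded, which requires metrics-server to be serving the metrics.k8s.io aggregated API, and only the disable step tripped over the runc probe. For reference, the data behind `kubectl top` can be read directly; a sketch assuming the k8s.io/metrics client and a default kubeconfig, not code from this test suite:

	// Sketch: read pod CPU/memory usage from the metrics.k8s.io API.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		mc := metricsclient.NewForConfigOrDie(cfg)
		podMetrics, err := mc.MetricsV1beta1().PodMetricses("kube-system").
			List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err) // fails just like "kubectl top pods" when metrics-server is down
		}
		for _, pm := range podMetrics.Items {
			for _, c := range pm.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", pm.Name, c.Name,
					c.Usage.Cpu().String(), c.Usage.Memory().String())
			}
		}
	}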

x
+
TestAddons/parallel/CSI (49.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI


=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.210915ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-306574 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-306574 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [35a7b831-b180-4d26-83c9-2fd40b82d259] Pending
helpers_test.go:352: "task-pv-pod" [35a7b831-b180-4d26-83c9-2fd40b82d259] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [35a7b831-b180-4d26-83c9-2fd40b82d259] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004047436s
addons_test.go:572: (dbg) Run:  kubectl --context addons-306574 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-306574 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-306574 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-306574 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-306574 delete pod task-pv-pod: (1.076464889s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-306574 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-306574 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-306574 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [63f59069-a6ee-4c87-82e5-74bf410bfe36] Pending
helpers_test.go:352: "task-pv-pod-restore" [63f59069-a6ee-4c87-82e5-74bf410bfe36] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [63f59069-a6ee-4c87-82e5-74bf410bfe36] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003036747s
addons_test.go:614: (dbg) Run:  kubectl --context addons-306574 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-306574 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-306574 delete volumesnapshot new-snapshot-demo
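The snapshot and restore steps above apply testdata/csi-hostpath-driver manifests that this report does not reproduce. For orientation, a VolumeSnapshot for this flow would look roughly like the YAML below; new-snapshot-demo and hpvc match the objects the test creates and deletes, while the snapshot class name is an assumption, not the actual testdata content:

	# Illustrative only: the real testdata file is not included in this report.
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo        # the snapshot the test waits on and deletes
	  namespace: default
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc                 # PVC created earlier in the test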
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (252.229586ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:23:39.811265   21645 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:39.811664   21645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:39.811677   21645 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:39.811681   21645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:39.811920   21645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:23:39.812227   21645 mustload.go:66] Loading cluster: addons-306574
	I1029 08:23:39.812615   21645 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:39.812632   21645 addons.go:607] checking whether the cluster is paused
	I1029 08:23:39.812733   21645 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:39.812756   21645 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:23:39.813208   21645 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:23:39.831530   21645 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:39.831599   21645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:23:39.849438   21645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:23:39.950193   21645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:39.950271   21645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:39.980795   21645 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:23:39.980822   21645 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:23:39.980829   21645 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:23:39.980834   21645 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:23:39.980839   21645 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:23:39.980844   21645 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:23:39.980846   21645 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:23:39.980849   21645 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:23:39.980852   21645 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:23:39.980868   21645 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:23:39.980875   21645 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:23:39.980878   21645 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:23:39.980880   21645 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:23:39.980883   21645 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:23:39.980886   21645 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:23:39.980891   21645 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:23:39.980898   21645 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:23:39.980905   21645 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:23:39.980909   21645 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:23:39.980913   21645 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:23:39.980918   21645 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:23:39.980922   21645 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:23:39.980925   21645 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:23:39.980937   21645 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:23:39.980943   21645 cri.go:89] found id: ""
	I1029 08:23:39.981003   21645 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:39.996156   21645 out.go:203] 
	W1029 08:23:39.997495   21645 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:39.997519   21645 out.go:285] * 
	W1029 08:23:40.000521   21645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:40.003819   21645 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
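The disable call above never reaches the addon: it aborts inside minikube's paused-state check, which (per the ssh_runner.go lines in the trace) shells out to "sudo runc list -f json" and treats the missing /run/runc state directory as fatal. A minimal Go sketch of that probe under those assumptions -- the function name and error wrapping here are illustrative, not minikube's actual code:

// paused_probe.go - reproduces the failing check seen in the trace above.
package main

import (
	"fmt"
	"os/exec"
)

// checkPaused mirrors the "list paused" step: ask runc for its container
// state and fail if the command fails. On this node /run/runc does not
// exist, so runc exits 1 and the whole addon operation exits 11.
func checkPaused() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("check paused: list paused: runc: %w: %s", err, out)
	}
	fmt.Printf("runc state: %s\n", out)
	return nil
}

func main() {
	if err := checkPaused(); err != nil {
		fmt.Println("X Exiting due to MK_ADDON_DISABLE_PAUSED:", err)
	}
}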
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (254.013782ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1029 08:23:40.066924   21707 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:40.067244   21707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:40.067255   21707 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:40.067260   21707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:40.067488   21707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:23:40.067737   21707 mustload.go:66] Loading cluster: addons-306574
	I1029 08:23:40.068110   21707 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:40.068126   21707 addons.go:607] checking whether the cluster is paused
	I1029 08:23:40.068212   21707 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:40.068228   21707 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:23:40.068615   21707 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:23:40.087282   21707 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:40.087340   21707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:23:40.105608   21707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:23:40.205918   21707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:40.206019   21707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:40.237588   21707 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:23:40.237614   21707 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:23:40.237618   21707 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:23:40.237621   21707 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:23:40.237624   21707 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:23:40.237627   21707 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:23:40.237630   21707 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:23:40.237632   21707 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:23:40.237635   21707 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:23:40.237639   21707 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:23:40.237642   21707 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:23:40.237644   21707 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:23:40.237647   21707 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:23:40.237649   21707 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:23:40.237652   21707 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:23:40.237660   21707 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:23:40.237662   21707 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:23:40.237667   21707 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:23:40.237669   21707 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:23:40.237672   21707 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:23:40.237674   21707 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:23:40.237676   21707 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:23:40.237679   21707 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:23:40.237681   21707 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:23:40.237684   21707 cri.go:89] found id: ""
	I1029 08:23:40.237729   21707 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:40.252542   21707 out.go:203] 
	W1029 08:23:40.253911   21707 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:40.253936   21707 out.go:285] * 
	W1029 08:23:40.257078   21707 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:40.258539   21707 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
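Both disable failures in this test share that root cause, so a hedged sketch of a more tolerant probe is worth recording: if runc's state root is absent, nothing is runc-managed and therefore nothing can be paused, which suggests the check could degrade gracefully instead of aborting. This is an assumption about intent, not minikube's actual fix:

// tolerant_probe.go - hedged sketch: treat a missing /run/runc as
// "no paused containers" rather than a fatal error.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"os/exec"
)

func listPaused() ([]byte, error) {
	// If runc has never created its state root, it manages no containers,
	// so none can be paused; skip the exec entirely.
	if _, err := os.Stat("/run/runc"); errors.Is(err, fs.ErrNotExist) {
		return nil, nil
	}
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	out, err := listPaused()
	if err != nil {
		fmt.Fprintln(os.Stderr, "check paused:", err)
		os.Exit(11)
	}
	fmt.Printf("paused check passed (%d bytes of runc state)\n", len(out))
}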
--- FAIL: TestAddons/parallel/CSI (49.46s)

TestAddons/parallel/Headlamp (2.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-306574 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-306574 --alsologtostderr -v=1: exit status 11 (250.699668ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1029 08:22:45.590485   17573 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:45.590807   17573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:45.590818   17573 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:45.590823   17573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:45.591044   17573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:45.591331   17573 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:45.591660   17573 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:45.591672   17573 addons.go:607] checking whether the cluster is paused
	I1029 08:22:45.591751   17573 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:45.591766   17573 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:45.592129   17573 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:45.610715   17573 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:45.610768   17573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:45.629522   17573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:45.728715   17573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:45.728780   17573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:45.758800   17573 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:45.758835   17573 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:45.758839   17573 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:45.758843   17573 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:45.758846   17573 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:45.758850   17573 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:45.758853   17573 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:45.758856   17573 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:45.758858   17573 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:45.758869   17573 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:45.758872   17573 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:45.758875   17573 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:45.758877   17573 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:45.758879   17573 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:45.758883   17573 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:45.758889   17573 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:45.758894   17573 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:45.758898   17573 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:45.758901   17573 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:45.758903   17573 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:45.758905   17573 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:45.758908   17573 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:45.758910   17573 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:45.758913   17573 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:45.758916   17573 cri.go:89] found id: ""
	I1029 08:22:45.758961   17573 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:45.774125   17573 out.go:203] 
	W1029 08:22:45.775136   17573 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:45.775158   17573 out.go:285] * 
	W1029 08:22:45.778076   17573 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:45.779345   17573 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-306574 --alsologtostderr -v=1": exit status 11
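For readers unfamiliar with the "(dbg) Non-zero exit ... exit status 11" lines: the test helpers run the minikube binary as a subprocess and inspect its exit code. A self-contained sketch of that pattern follows; the command and flags are taken from the failing call above, while the structure is illustrative rather than the actual helpers_test.go API:

// exitcode.go - how a test harness observes "exit status 11" from a
// subprocess; illustrative, not the actual helpers_test.go code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "headlamp",
		"-p", "addons-306574", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// This is the "exit status 11" the assertion above reports.
		fmt.Printf("Non-zero exit: exit status %d\n%s\n", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run:", err)
		return
	}
	fmt.Printf("success:\n%s\n", out)
}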
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-306574
helpers_test.go:243: (dbg) docker inspect addons-306574:

-- stdout --
	[
	    {
	        "Id": "64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c",
	        "Created": "2025-10-29T08:20:31.838258712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9193,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:20:31.883270971Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/hosts",
	        "LogPath": "/var/lib/docker/containers/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c/64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c-json.log",
	        "Name": "/addons-306574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-306574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-306574",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64d25cf53f5bfb5e01ec026ef15e2dbd60d95b3a435ac8db06862165e005aa1c",
	                "LowerDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/582dfceb6524e2af206343427a7d6df2b0c2f63bddc0a11f512404555061131a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-306574",
	                "Source": "/var/lib/docker/volumes/addons-306574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-306574",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-306574",
	                "name.minikube.sigs.k8s.io": "addons-306574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08a554e130cebdb108114b821dda4c7a12a11e230f8aa02b9bdf9687d2909484",
	            "SandboxKey": "/var/run/docker/netns/08a554e130ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-306574": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e5:6f:ca:dd:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "829551abfb51934fbddb1ff5d175377487c2b351054b25edadba7ad2f11d302a",
	                    "EndpointID": "dc28ab56d90a0d083579dad71621695248a8a160dbd9fbdf1bca6ec3a985dc31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-306574",
	                        "64d25cf53f5b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
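The Ports block in this inspect dump is what the stderr traces read with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} to find the SSH port (32768 here). A small sketch that extracts the same value from the JSON directly; only the fields actually needed are modeled, everything else in the dump is ignored:

// sshport.go - pull the SSH host port out of `docker container inspect`
// JSON, mirroring the template used in the stderr traces above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	raw, err := exec.Command("docker", "container", "inspect", "addons-306574").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	ports := containers[0].NetworkSettings.Ports["22/tcp"]
	if len(ports) == 0 {
		panic("no host binding for 22/tcp")
	}
	fmt.Println(ports[0].HostPort) // prints 32768 for the dump above
}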
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-306574 -n addons-306574
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-306574 logs -n 25: (1.170302219s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-682324 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-682324   │ jenkins │ v1.37.0 │ 29 Oct 25 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-682324                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-682324   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ -o=json --download-only -p download-only-360816 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-360816   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-360816                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-360816   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-682324                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-682324   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-360816                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-360816   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ --download-only -p download-docker-695934 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-695934 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ -p download-docker-695934                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-695934 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ --download-only -p binary-mirror-110152 --alsologtostderr --binary-mirror http://127.0.0.1:45439 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-110152   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ -p binary-mirror-110152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-110152   │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ addons  │ enable dashboard -p addons-306574                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-306574          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-306574                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-306574          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ start   │ -p addons-306574 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-306574          │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:22 UTC │
	│ addons  │ addons-306574 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-306574          │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ addons-306574 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-306574          │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	│ addons  │ enable headlamp -p addons-306574 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-306574          │ jenkins │ v1.37.0 │ 29 Oct 25 08:22 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:09.906757    8556 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:09.906896    8556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:09.906912    8556 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:09.906918    8556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:09.907149    8556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:20:09.907700    8556 out.go:368] Setting JSON to false
	I1029 08:20:09.908542    8556 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":158,"bootTime":1761725852,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:20:09.908624    8556 start.go:143] virtualization: kvm guest
	I1029 08:20:09.910438    8556 out.go:179] * [addons-306574] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:20:09.911971    8556 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:20:09.912004    8556 notify.go:221] Checking for updates...
	I1029 08:20:09.914600    8556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:09.915963    8556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:20:09.917359    8556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 08:20:09.918693    8556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:20:09.919979    8556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:20:09.921496    8556 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:09.944084    8556 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 08:20:09.944169    8556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:10.000881    8556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-29 08:20:09.99117085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:20:10.001037    8556 docker.go:319] overlay module found
	I1029 08:20:10.002704    8556 out.go:179] * Using the docker driver based on user configuration
	I1029 08:20:10.003865    8556 start.go:309] selected driver: docker
	I1029 08:20:10.003892    8556 start.go:930] validating driver "docker" against <nil>
	I1029 08:20:10.003906    8556 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:20:10.004613    8556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:10.060400    8556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-29 08:20:10.051126923 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:20:10.060598    8556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:10.060811    8556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:20:10.062305    8556 out.go:179] * Using Docker driver with root privileges
	I1029 08:20:10.063410    8556 cni.go:84] Creating CNI manager for ""
	I1029 08:20:10.063482    8556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:10.063495    8556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 08:20:10.063561    8556 start.go:353] cluster config:
	{Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:20:10.064973    8556 out.go:179] * Starting "addons-306574" primary control-plane node in "addons-306574" cluster
	I1029 08:20:10.066316    8556 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 08:20:10.067638    8556 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 08:20:10.068808    8556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:10.068851    8556 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 08:20:10.068861    8556 cache.go:59] Caching tarball of preloaded images
	I1029 08:20:10.068911    8556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 08:20:10.068935    8556 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 08:20:10.068943    8556 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:20:10.069303    8556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/config.json ...
	I1029 08:20:10.069330    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/config.json: {Name:mk5f603a5977d4732cb43592e784826e5c098291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:10.086271    8556 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1029 08:20:10.086383    8556 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1029 08:20:10.086400    8556 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1029 08:20:10.086404    8556 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1029 08:20:10.086411    8556 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1029 08:20:10.086418    8556 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1029 08:20:23.357314    8556 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1029 08:20:23.357364    8556 cache.go:233] Successfully downloaded all kic artifacts
	I1029 08:20:23.357397    8556 start.go:360] acquireMachinesLock for addons-306574: {Name:mkb2bc35c8399927cc17b5ede24d6fc9e49bd344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:20:23.357506    8556 start.go:364] duration metric: took 92.938µs to acquireMachinesLock for "addons-306574"
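The machines lock acquired above is file-based, with the 500ms retry delay and 10m timeout visible in the log line. As a minimal sketch only (assuming a create-exclusive lock file; this is not minikube's actual lock implementation), the same retry/timeout pattern in Go:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls until lockPath can be created exclusively or the
// timeout elapses; the holder releases by removing the file. The delay
// and timeout mirror the values shown in the log above.
func acquireLock(lockPath string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close() // lock held
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", lockPath, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := "/tmp/minikube-machines.lock" // hypothetical path, for illustration only
	if err := acquireLock(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
		panic(err)
	}
	defer os.Remove(lock)
	fmt.Println("machines lock acquired")
}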
	I1029 08:20:23.357533    8556 start.go:93] Provisioning new machine with config: &{Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:20:23.357621    8556 start.go:125] createHost starting for "" (driver="docker")
	I1029 08:20:23.359355    8556 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1029 08:20:23.359596    8556 start.go:159] libmachine.API.Create for "addons-306574" (driver="docker")
	I1029 08:20:23.359631    8556 client.go:173] LocalClient.Create starting
	I1029 08:20:23.359735    8556 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 08:20:23.528657    8556 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 08:20:23.910424    8556 cli_runner.go:164] Run: docker network inspect addons-306574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 08:20:23.927823    8556 cli_runner.go:211] docker network inspect addons-306574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 08:20:23.927894    8556 network_create.go:284] running [docker network inspect addons-306574] to gather additional debugging logs...
	I1029 08:20:23.927918    8556 cli_runner.go:164] Run: docker network inspect addons-306574
	W1029 08:20:23.945148    8556 cli_runner.go:211] docker network inspect addons-306574 returned with exit code 1
	I1029 08:20:23.945177    8556 network_create.go:287] error running [docker network inspect addons-306574]: docker network inspect addons-306574: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-306574 not found
	I1029 08:20:23.945200    8556 network_create.go:289] output of [docker network inspect addons-306574]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-306574 not found
	
	** /stderr **
	I1029 08:20:23.945328    8556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:20:23.962934    8556 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d27c70}
	I1029 08:20:23.962982    8556 network_create.go:124] attempt to create docker network addons-306574 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1029 08:20:23.963053    8556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-306574 addons-306574
	I1029 08:20:24.023583    8556 network_create.go:108] docker network addons-306574 192.168.49.0/24 created
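The "docker network create" invocation above can be reproduced directly. A minimal Go sketch that shells out to the same CLI command (assumes the docker client is on PATH; illustrative only, not minikube's cli_runner code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the logged command: a labeled bridge network with a
	// fixed subnet, gateway, and MTU.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=addons-306574",
		"addons-306574")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}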
	I1029 08:20:24.023615    8556 kic.go:121] calculated static IP "192.168.49.2" for the "addons-306574" container
	I1029 08:20:24.023685    8556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 08:20:24.042007    8556 cli_runner.go:164] Run: docker volume create addons-306574 --label name.minikube.sigs.k8s.io=addons-306574 --label created_by.minikube.sigs.k8s.io=true
	I1029 08:20:24.060146    8556 oci.go:103] Successfully created a docker volume addons-306574
	I1029 08:20:24.060229    8556 cli_runner.go:164] Run: docker run --rm --name addons-306574-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306574 --entrypoint /usr/bin/test -v addons-306574:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 08:20:27.317149    8556 cli_runner.go:217] Completed: docker run --rm --name addons-306574-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306574 --entrypoint /usr/bin/test -v addons-306574:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.256885266s)
	I1029 08:20:27.317186    8556 oci.go:107] Successfully prepared a docker volume addons-306574
	I1029 08:20:27.317212    8556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:27.317237    8556 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 08:20:27.317299    8556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-306574:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 08:20:31.763947    8556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-306574:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.446598915s)
	I1029 08:20:31.763977    8556 kic.go:203] duration metric: took 4.446738239s to extract preloaded images to volume ...
	W1029 08:20:31.764089    8556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1029 08:20:31.764129    8556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1029 08:20:31.764166    8556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 08:20:31.821753    8556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-306574 --name addons-306574 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306574 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-306574 --network addons-306574 --ip 192.168.49.2 --volume addons-306574:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 08:20:32.133867    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Running}}
	I1029 08:20:32.154974    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:32.174763    8556 cli_runner.go:164] Run: docker exec addons-306574 stat /var/lib/dpkg/alternatives/iptables
	I1029 08:20:32.221358    8556 oci.go:144] the created container "addons-306574" has a running status.
	I1029 08:20:32.221394    8556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa...
	I1029 08:20:32.374637    8556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 08:20:32.403505    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:32.421721    8556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 08:20:32.421744    8556 kic_runner.go:114] Args: [docker exec --privileged addons-306574 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 08:20:32.477582    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:32.500046    8556 machine.go:94] provisionDockerMachine start ...
	I1029 08:20:32.500144    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:32.521940    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:32.522290    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:32.522307    8556 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:20:32.667953    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-306574
	
	I1029 08:20:32.668010    8556 ubuntu.go:182] provisioning hostname "addons-306574"
	I1029 08:20:32.668095    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:32.687731    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:32.687980    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:32.688014    8556 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-306574 && echo "addons-306574" | sudo tee /etc/hostname
	I1029 08:20:32.841191    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-306574
	
	I1029 08:20:32.841266    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:32.861731    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:32.861983    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:32.862021    8556 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-306574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-306574/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-306574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:20:33.003899    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
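provisionDockerMachine drives these hostname and /etc/hosts commands over SSH, through the host port Docker published for the container's 22/tcp (32768 in this run). A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the machine key path from the log (host-key checking is disabled for brevity, so this is illustrative only):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port as logged; adjust for your own environment.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic node, not for real hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // expected: addons-306574
}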
	I1029 08:20:33.003933    8556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 08:20:33.003974    8556 ubuntu.go:190] setting up certificates
	I1029 08:20:33.004006    8556 provision.go:84] configureAuth start
	I1029 08:20:33.004078    8556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306574
	I1029 08:20:33.022143    8556 provision.go:143] copyHostCerts
	I1029 08:20:33.022216    8556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 08:20:33.022343    8556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 08:20:33.022403    8556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 08:20:33.022459    8556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.addons-306574 san=[127.0.0.1 192.168.49.2 addons-306574 localhost minikube]
	I1029 08:20:33.253245    8556 provision.go:177] copyRemoteCerts
	I1029 08:20:33.253302    8556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:20:33.253335    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.271402    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:33.372257    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 08:20:33.392440    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:20:33.410707    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 08:20:33.428466    8556 provision.go:87] duration metric: took 424.443345ms to configureAuth
	I1029 08:20:33.428490    8556 ubuntu.go:206] setting minikube options for container-runtime
	I1029 08:20:33.428681    8556 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:20:33.428798    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.448437    8556 main.go:143] libmachine: Using SSH client type: native
	I1029 08:20:33.448676    8556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1029 08:20:33.448701    8556 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:20:33.703864    8556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:20:33.703893    8556 machine.go:97] duration metric: took 1.203820217s to provisionDockerMachine
	I1029 08:20:33.703906    8556 client.go:176] duration metric: took 10.344268871s to LocalClient.Create
	I1029 08:20:33.703933    8556 start.go:167] duration metric: took 10.344336656s to libmachine.API.Create "addons-306574"
	I1029 08:20:33.703944    8556 start.go:293] postStartSetup for "addons-306574" (driver="docker")
	I1029 08:20:33.703957    8556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:20:33.704039    8556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:20:33.704089    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.722678    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:33.825369    8556 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:20:33.829137    8556 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 08:20:33.829164    8556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 08:20:33.829175    8556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 08:20:33.829257    8556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 08:20:33.829294    8556 start.go:296] duration metric: took 125.343097ms for postStartSetup
	I1029 08:20:33.829674    8556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306574
	I1029 08:20:33.847441    8556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/config.json ...
	I1029 08:20:33.847735    8556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:20:33.847784    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.864929    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:33.963295    8556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 08:20:33.967828    8556 start.go:128] duration metric: took 10.610190443s to createHost
	I1029 08:20:33.967855    8556 start.go:83] releasing machines lock for "addons-306574", held for 10.610336125s
	I1029 08:20:33.967918    8556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306574
	I1029 08:20:33.985784    8556 ssh_runner.go:195] Run: cat /version.json
	I1029 08:20:33.985840    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:33.985854    8556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:20:33.985915    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:34.004200    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:34.008194    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:34.102298    8556 ssh_runner.go:195] Run: systemctl --version
	I1029 08:20:34.163972    8556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:20:34.200580    8556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:20:34.205362    8556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:20:34.205431    8556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:20:34.232483    8556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1029 08:20:34.232510    8556 start.go:496] detecting cgroup driver to use...
	I1029 08:20:34.232542    8556 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 08:20:34.232586    8556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:20:34.248262    8556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:20:34.260721    8556 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:20:34.260769    8556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:20:34.277484    8556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:20:34.296115    8556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:20:34.378206    8556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:20:34.465971    8556 docker.go:234] disabling docker service ...
	I1029 08:20:34.466051    8556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:20:34.483897    8556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:20:34.496865    8556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:20:34.581350    8556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:20:34.662554    8556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:20:34.674694    8556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:20:34.688350    8556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:20:34.688401    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.699123    8556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 08:20:34.699181    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.708321    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.717640    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.726743    8556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:20:34.735766    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.745125    8556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.759351    8556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:20:34.768763    8556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:20:34.776273    8556 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1029 08:20:34.776339    8556 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1029 08:20:34.789191    8556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
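The netfilter check above follows a common fallback: probe the sysctl, and if the key is absent, load br_netfilter and continue. A compact Go sketch of that try-then-modprobe pattern (needs root and the sysctl/modprobe binaries; illustrative, not minikube's detection code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the bridge-netfilter sysctl first; a failure usually means
	// the br_netfilter module is not loaded yet.
	if out, err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").CombinedOutput(); err != nil {
		fmt.Printf("sysctl probe failed (%v): %s", err, out)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("modprobe br_netfilter: %v: %s", err, out))
		}
	}
	fmt.Println("bridge netfilter ready")
}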
	I1029 08:20:34.797829    8556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:20:34.874729    8556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:20:34.985627    8556 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:20:34.985702    8556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:20:34.989762    8556 start.go:564] Will wait 60s for crictl version
	I1029 08:20:34.989815    8556 ssh_runner.go:195] Run: which crictl
	I1029 08:20:34.993493    8556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 08:20:35.018217    8556 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
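The 60s waits above ("Will wait 60s for socket path", then for crictl) are bounded polls. A minimal local sketch of the socket wait (in the real flow the stat runs over SSH inside the node; this is an illustration, not the actual start.go code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses,
// mirroring the bounded wait for /var/run/crio/crio.sock in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}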
	I1029 08:20:35.018341    8556 ssh_runner.go:195] Run: crio --version
	I1029 08:20:35.046556    8556 ssh_runner.go:195] Run: crio --version
	I1029 08:20:35.075415    8556 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 08:20:35.076578    8556 cli_runner.go:164] Run: docker network inspect addons-306574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 08:20:35.093360    8556 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1029 08:20:35.097529    8556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:20:35.108105    8556 kubeadm.go:884] updating cluster {Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:20:35.108213    8556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:20:35.108263    8556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:20:35.139296    8556 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:20:35.139319    8556 crio.go:433] Images already preloaded, skipping extraction
	I1029 08:20:35.139377    8556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:20:35.164831    8556 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:20:35.164857    8556 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:20:35.164866    8556 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1029 08:20:35.164960    8556 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-306574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:20:35.165047    8556 ssh_runner.go:195] Run: crio config
	I1029 08:20:35.208252    8556 cni.go:84] Creating CNI manager for ""
	I1029 08:20:35.208276    8556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:35.208297    8556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:20:35.208318    8556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-306574 NodeName:addons-306574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:20:35.208454    8556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-306574"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 08:20:35.208513    8556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:20:35.216691    8556 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:20:35.216777    8556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 08:20:35.224812    8556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1029 08:20:35.238314    8556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:20:35.254187    8556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1029 08:20:35.266650    8556 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1029 08:20:35.270351    8556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
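The bash one-liner above keeps /etc/hosts idempotent: drop any existing line for the name, append a fresh tab-separated entry, then copy the result into place. The same pattern as a Go sketch (ensureHostsEntry is a hypothetical helper; writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one
// "ip<TAB>name" line, like the grep -v / echo / cp one-liner above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured")
}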
	I1029 08:20:35.279970    8556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:20:35.360884    8556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:20:35.387704    8556 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574 for IP: 192.168.49.2
	I1029 08:20:35.387732    8556 certs.go:195] generating shared ca certs ...
	I1029 08:20:35.387754    8556 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:35.387963    8556 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 08:20:35.648830    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt ...
	I1029 08:20:35.648867    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt: {Name:mk19434f5fe1032a86a95cec63e899c58bd71e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:35.649101    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key ...
	I1029 08:20:35.649120    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key: {Name:mkf48a9e65e1fc5deb4dbacbb470b77a0ea967b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:35.649230    8556 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 08:20:36.202367    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt ...
	I1029 08:20:36.202411    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt: {Name:mk90b59f269020c36a09edabc548ec68458d54fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.202601    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key ...
	I1029 08:20:36.202616    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key: {Name:mkb7777a792c226f0bdd072bde419b5711b07f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.202686    8556 certs.go:257] generating profile certs ...
	I1029 08:20:36.202741    8556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.key
	I1029 08:20:36.202757    8556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt with IP's: []
	I1029 08:20:36.438899    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt ...
	I1029 08:20:36.438934    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: {Name:mkdb3c610cb943dacfd4b86491b16143782c58e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.439139    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.key ...
	I1029 08:20:36.439154    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.key: {Name:mk4714e932773a8b002dca10872328e2ffd71de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.439223    8556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a
	I1029 08:20:36.439242    8556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1029 08:20:36.997184    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a ...
	I1029 08:20:36.997217    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a: {Name:mk49fb4a8ef797f5a910f20b574ebbef85fb6c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.997394    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a ...
	I1029 08:20:36.997408    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a: {Name:mkebc158fb51d02e09da9dfd7eb30396310b38f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:36.997486    8556 certs.go:382] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt.5c4d1c1a -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt
	I1029 08:20:36.997590    8556 certs.go:386] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key.5c4d1c1a -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key
	I1029 08:20:36.997653    8556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key
	I1029 08:20:36.997671    8556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt with IP's: []
	I1029 08:20:37.106436    8556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt ...
	I1029 08:20:37.106465    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt: {Name:mk587246a2c54d75286b921b755d7486a9e60cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:37.106640    8556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key ...
	I1029 08:20:37.106653    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key: {Name:mk3c2ef6b48338ba8edc08369dd323973bd8b0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
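The "minikubeCA" and "proxyClientCA" certificates generated above are ordinary self-signed x509 CAs. A minimal standard-library sketch of creating one (the 2048-bit key size and 10-year lifetime here are illustrative assumptions, not necessarily what certs.go uses):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA key pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed CA template: the certificate is its own parent below.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // illustrative lifetime
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// PEM-encode to stdout; the real flow writes ca.crt/ca.key files instead.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}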
	I1029 08:20:37.106826    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 08:20:37.106863    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 08:20:37.106883    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:20:37.106908    8556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 08:20:37.107465    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:20:37.125245    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 08:20:37.143342    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:20:37.161691    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 08:20:37.179846    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 08:20:37.198409    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:20:37.216554    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:20:37.235353    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 08:20:37.253690    8556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:20:37.274111    8556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:20:37.286938    8556 ssh_runner.go:195] Run: openssl version
	I1029 08:20:37.293253    8556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:20:37.304949    8556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:20:37.309128    8556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:20:37.309182    8556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:20:37.342795    8556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:20:37.352034    8556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:20:37.355987    8556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 08:20:37.356083    8556 kubeadm.go:401] StartCluster: {Name:addons-306574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-306574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:20:37.356149    8556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:20:37.356191    8556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:20:37.383560    8556 cri.go:89] found id: ""
	I1029 08:20:37.383617    8556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:20:37.391909    8556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 08:20:37.400152    8556 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 08:20:37.400220    8556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 08:20:37.408405    8556 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 08:20:37.408445    8556 kubeadm.go:158] found existing configuration files:
	
	I1029 08:20:37.408503    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 08:20:37.416365    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 08:20:37.416432    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 08:20:37.424158    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 08:20:37.432020    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 08:20:37.432092    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 08:20:37.439838    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 08:20:37.447677    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 08:20:37.447741    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 08:20:37.455342    8556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 08:20:37.463198    8556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 08:20:37.463273    8556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
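
The four checks above share one pattern: grep each kubeconfig for the expected control-plane endpoint and, on a non-zero exit (here status 2, file missing), remove the file so kubeadm regenerates it. A minimal sketch of that cleanup, assuming the same endpoint and file set as this run:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or absent: kubeadm init rewrites it
      fi
    done
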
	I1029 08:20:37.471066    8556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 08:20:37.528754    8556 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1029 08:20:37.585800    8556 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 08:20:47.483196    8556 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 08:20:47.483272    8556 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 08:20:47.483355    8556 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 08:20:47.483402    8556 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1029 08:20:47.483435    8556 kubeadm.go:319] OS: Linux
	I1029 08:20:47.483476    8556 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 08:20:47.483559    8556 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 08:20:47.483633    8556 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 08:20:47.483674    8556 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 08:20:47.483731    8556 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 08:20:47.483808    8556 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 08:20:47.483884    8556 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 08:20:47.483936    8556 kubeadm.go:319] CGROUPS_IO: enabled
	I1029 08:20:47.484035    8556 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 08:20:47.484126    8556 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 08:20:47.484248    8556 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 08:20:47.484353    8556 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 08:20:47.486070    8556 out.go:252]   - Generating certificates and keys ...
	I1029 08:20:47.486171    8556 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 08:20:47.486276    8556 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 08:20:47.486376    8556 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 08:20:47.486476    8556 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 08:20:47.486582    8556 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 08:20:47.486669    8556 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 08:20:47.486753    8556 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 08:20:47.486920    8556 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-306574 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:20:47.487097    8556 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 08:20:47.487292    8556 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-306574 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1029 08:20:47.487421    8556 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 08:20:47.487534    8556 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 08:20:47.487609    8556 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 08:20:47.487711    8556 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 08:20:47.487797    8556 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 08:20:47.487886    8556 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 08:20:47.487985    8556 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 08:20:47.488094    8556 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 08:20:47.488151    8556 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 08:20:47.488235    8556 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 08:20:47.488296    8556 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 08:20:47.489651    8556 out.go:252]   - Booting up control plane ...
	I1029 08:20:47.489758    8556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 08:20:47.489884    8556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 08:20:47.489982    8556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 08:20:47.490114    8556 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 08:20:47.490236    8556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 08:20:47.490346    8556 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 08:20:47.490426    8556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 08:20:47.490461    8556 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 08:20:47.490577    8556 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 08:20:47.490750    8556 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 08:20:47.490808    8556 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00181545s
	I1029 08:20:47.490942    8556 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 08:20:47.491082    8556 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1029 08:20:47.491215    8556 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 08:20:47.491296    8556 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 08:20:47.491365    8556 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.222338032s
	I1029 08:20:47.491426    8556 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.246054752s
	I1029 08:20:47.491484    8556 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001295374s
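
The health probes above hit well-known endpoints, so they can also be checked by hand outside the bootstrap flow. A sketch, assuming the default ports from this run and that the anonymously allowed health paths have not been locked down:

    curl -s  http://127.0.0.1:10248/healthz    # kubelet
    curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
    curl -sk https://192.168.49.2:8443/livez   # kube-apiserver
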
	I1029 08:20:47.491582    8556 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 08:20:47.491759    8556 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 08:20:47.491851    8556 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 08:20:47.492133    8556 kubeadm.go:319] [mark-control-plane] Marking the node addons-306574 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 08:20:47.492220    8556 kubeadm.go:319] [bootstrap-token] Using token: r5alvc.l4xv78fw0ie8bk9r
	I1029 08:20:47.494398    8556 out.go:252]   - Configuring RBAC rules ...
	I1029 08:20:47.494524    8556 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 08:20:47.494597    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 08:20:47.494740    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 08:20:47.494849    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 08:20:47.494961    8556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 08:20:47.495069    8556 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 08:20:47.495214    8556 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 08:20:47.495271    8556 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 08:20:47.495335    8556 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 08:20:47.495351    8556 kubeadm.go:319] 
	I1029 08:20:47.495415    8556 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 08:20:47.495425    8556 kubeadm.go:319] 
	I1029 08:20:47.495512    8556 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 08:20:47.495518    8556 kubeadm.go:319] 
	I1029 08:20:47.495544    8556 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 08:20:47.495631    8556 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 08:20:47.495685    8556 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 08:20:47.495692    8556 kubeadm.go:319] 
	I1029 08:20:47.495743    8556 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 08:20:47.495749    8556 kubeadm.go:319] 
	I1029 08:20:47.495800    8556 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 08:20:47.495807    8556 kubeadm.go:319] 
	I1029 08:20:47.495858    8556 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 08:20:47.495958    8556 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 08:20:47.496049    8556 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 08:20:47.496056    8556 kubeadm.go:319] 
	I1029 08:20:47.496141    8556 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 08:20:47.496217    8556 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 08:20:47.496223    8556 kubeadm.go:319] 
	I1029 08:20:47.496303    8556 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token r5alvc.l4xv78fw0ie8bk9r \
	I1029 08:20:47.496393    8556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 \
	I1029 08:20:47.496416    8556 kubeadm.go:319] 	--control-plane 
	I1029 08:20:47.496422    8556 kubeadm.go:319] 
	I1029 08:20:47.496500    8556 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 08:20:47.496506    8556 kubeadm.go:319] 
	I1029 08:20:47.496577    8556 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token r5alvc.l4xv78fw0ie8bk9r \
	I1029 08:20:47.496686    8556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 
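
The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes. It can be recomputed with the standard openssl pipeline from the kubeadm documentation; the CA path below assumes minikube's certificate directory reported in the certs phase earlier:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
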
	I1029 08:20:47.496697    8556 cni.go:84] Creating CNI manager for ""
	I1029 08:20:47.496703    8556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:20:47.498936    8556 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 08:20:47.500306    8556 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 08:20:47.504854    8556 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 08:20:47.504874    8556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 08:20:47.518778    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
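
With the docker driver paired with the crio runtime, minikube picks kindnet and applies its bundled manifest as above. A hedged follow-up check that the CNI binaries and daemonset landed (the daemonset name kindnet is an assumption based on minikube's kindnet manifest):

    stat /opt/cni/bin/portmap
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get ds kindnet
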
	I1029 08:20:47.725177    8556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 08:20:47.725312    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:47.725414    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-306574 minikube.k8s.io/updated_at=2025_10_29T08_20_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=addons-306574 minikube.k8s.io/primary=true
	I1029 08:20:47.798055    8556 ops.go:34] apiserver oom_adj: -16
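
An oom_adj of -16 shields the apiserver from the kernel OOM killer (the legacy scale runs -17 to 15, with -17 fully exempt). The log reads the old /proc interface; the modern equivalent field can be read the same way:

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale, as read above
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current scale (-1000..1000)
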
	I1029 08:20:47.798063    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:48.298215    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:48.799094    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:49.298146    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:49.798185    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:50.298588    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:50.798514    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:51.298161    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:51.798889    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:52.298602    8556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:20:52.366811    8556 kubeadm.go:1114] duration metric: took 4.641537882s to wait for elevateKubeSystemPrivileges
	I1029 08:20:52.366841    8556 kubeadm.go:403] duration metric: took 15.010794887s to StartCluster
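
The half-second cadence of the "get sa default" runs above is a poll: privilege elevation for kube-system waits until the default ServiceAccount exists. Reduced to a sketch using the same binary and kubeconfig as this run:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
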
	I1029 08:20:52.366862    8556 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:52.367029    8556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:20:52.368137    8556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:20:52.368441    8556 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:20:52.368763    8556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 08:20:52.368721    8556 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1029 08:20:52.368946    8556 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-306574"
	I1029 08:20:52.368986    8556 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-306574"
	I1029 08:20:52.369021    8556 addons.go:70] Setting cloud-spanner=true in profile "addons-306574"
	I1029 08:20:52.369036    8556 addons.go:239] Setting addon cloud-spanner=true in "addons-306574"
	I1029 08:20:52.369037    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369081    8556 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-306574"
	I1029 08:20:52.369090    8556 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-306574"
	I1029 08:20:52.369122    8556 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-306574"
	I1029 08:20:52.369154    8556 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-306574"
	I1029 08:20:52.369134    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369163    8556 addons.go:70] Setting yakd=true in profile "addons-306574"
	I1029 08:20:52.369176    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369178    8556 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:20:52.369199    8556 addons.go:70] Setting default-storageclass=true in profile "addons-306574"
	I1029 08:20:52.369228    8556 addons.go:239] Setting addon yakd=true in "addons-306574"
	I1029 08:20:52.369238    8556 addons.go:70] Setting metrics-server=true in profile "addons-306574"
	I1029 08:20:52.369241    8556 addons.go:70] Setting gcp-auth=true in profile "addons-306574"
	I1029 08:20:52.369263    8556 addons.go:239] Setting addon metrics-server=true in "addons-306574"
	I1029 08:20:52.369273    8556 mustload.go:66] Loading cluster: addons-306574
	I1029 08:20:52.369300    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369314    8556 addons.go:70] Setting storage-provisioner=true in profile "addons-306574"
	I1029 08:20:52.369335    8556 addons.go:239] Setting addon storage-provisioner=true in "addons-306574"
	I1029 08:20:52.369343    8556 addons.go:70] Setting volcano=true in profile "addons-306574"
	I1029 08:20:52.369362    8556 addons.go:239] Setting addon volcano=true in "addons-306574"
	I1029 08:20:52.369371    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369384    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369471    8556 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:20:52.369791    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369930    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369933    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369943    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369949    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369968    8556 addons.go:70] Setting registry=true in profile "addons-306574"
	I1029 08:20:52.369982    8556 addons.go:239] Setting addon registry=true in "addons-306574"
	I1029 08:20:52.369301    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.370448    8556 addons.go:70] Setting registry-creds=true in profile "addons-306574"
	I1029 08:20:52.370466    8556 out.go:179] * Verifying Kubernetes components...
	I1029 08:20:52.370472    8556 addons.go:239] Setting addon registry-creds=true in "addons-306574"
	I1029 08:20:52.370510    8556 addons.go:70] Setting inspektor-gadget=true in profile "addons-306574"
	I1029 08:20:52.370526    8556 addons.go:239] Setting addon inspektor-gadget=true in "addons-306574"
	I1029 08:20:52.370555    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.370605    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.369231    8556 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-306574"
	I1029 08:20:52.370869    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370923    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.371013    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370477    8556 addons.go:70] Setting ingress=true in profile "addons-306574"
	I1029 08:20:52.372474    8556 addons.go:239] Setting addon ingress=true in "addons-306574"
	I1029 08:20:52.372558    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.369953    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.373115    8556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:20:52.373271    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370453    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.373796    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.370489    8556 addons.go:70] Setting ingress-dns=true in profile "addons-306574"
	I1029 08:20:52.374823    8556 addons.go:239] Setting addon ingress-dns=true in "addons-306574"
	I1029 08:20:52.370500    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.372330    8556 addons.go:70] Setting volumesnapshots=true in profile "addons-306574"
	I1029 08:20:52.372346    8556 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-306574"
	I1029 08:20:52.369179    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.374985    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.375527    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.375725    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.375743    8556 addons.go:239] Setting addon volumesnapshots=true in "addons-306574"
	I1029 08:20:52.376326    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.375755    8556 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-306574"
	I1029 08:20:52.376288    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.387385    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.388986    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.417712    8556 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 08:20:52.419161    8556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:20:52.419184    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 08:20:52.419248    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
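
The docker container inspect template above (repeated for each addon) resolves which host port maps to the container's sshd on 22/tcp; the sshutil lines later confirm port 32768 on this run. The same lookup by hand, assuming the profile name and key path from this log:

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-306574)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa \
      docker@127.0.0.1
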
	I1029 08:20:52.424507    8556 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1029 08:20:52.432223    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1029 08:20:52.432259    8556 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1029 08:20:52.432327    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.437120    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.443875    8556 addons.go:239] Setting addon default-storageclass=true in "addons-306574"
	I1029 08:20:52.446556    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.448619    8556 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1029 08:20:52.448793    8556 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1029 08:20:52.449414    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.449628    8556 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:20:52.449644    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1029 08:20:52.449692    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.453648    8556 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:20:52.453673    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1029 08:20:52.453739    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.464150    8556 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1029 08:20:52.465244    8556 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1029 08:20:52.465244    8556 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1029 08:20:52.466169    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1029 08:20:52.466258    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.470481    8556 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:20:52.470505    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1029 08:20:52.470586    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	W1029 08:20:52.472569    8556 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1029 08:20:52.481497    8556 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-306574"
	I1029 08:20:52.481564    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:20:52.482131    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:20:52.504035    8556 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1029 08:20:52.504152    8556 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1029 08:20:52.504180    8556 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1029 08:20:52.506568    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1029 08:20:52.506592    8556 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1029 08:20:52.506661    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.506846    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1029 08:20:52.506916    8556 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1029 08:20:52.506927    8556 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1029 08:20:52.506976    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.507185    8556 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:20:52.507199    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1029 08:20:52.507244    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.508343    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1029 08:20:52.508362    8556 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1029 08:20:52.508414    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.515794    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1029 08:20:52.515877    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1029 08:20:52.519224    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:20:52.520489    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1029 08:20:52.521652    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:20:52.521747    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1029 08:20:52.523674    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1029 08:20:52.523852    8556 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:20:52.523876    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1029 08:20:52.523945    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.526230    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1029 08:20:52.526523    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.531830    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1029 08:20:52.531838    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.535886    8556 out.go:179]   - Using image docker.io/registry:3.0.0
	I1029 08:20:52.536922    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1029 08:20:52.536971    8556 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1029 08:20:52.542187    8556 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1029 08:20:52.542289    8556 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1029 08:20:52.542300    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1029 08:20:52.542361    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.543362    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1029 08:20:52.543383    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1029 08:20:52.543459    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.545651    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.545717    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.554807    8556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
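
The sed pipeline above splices two edits into the CoreDNS Corefile before replacing the ConfigMap: a log directive ahead of errors, and a hosts block ahead of the "forward . /etc/resolv.conf" line so pods resolve host.minikube.internal to the host gateway. The injected fragment:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
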
	I1029 08:20:52.556590    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.557527    8556 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 08:20:52.557551    8556 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 08:20:52.557665    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.558779    8556 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1029 08:20:52.560029    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.568161    8556 out.go:179]   - Using image docker.io/busybox:stable
	I1029 08:20:52.570480    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.571195    8556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:20:52.571217    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1029 08:20:52.571279    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:20:52.573098    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.578541    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.592748    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.595121    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.598974    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.602092    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.611277    8556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:20:52.625045    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:20:52.635401    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	W1029 08:20:52.641153    8556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1029 08:20:52.641263    8556 retry.go:31] will retry after 283.064128ms: ssh: handshake failed: EOF
	I1029 08:20:52.736278    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:20:52.740304    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:20:52.752490    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1029 08:20:52.752541    8556 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1029 08:20:52.757757    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1029 08:20:52.757785    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1029 08:20:52.761085    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:20:52.779377    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1029 08:20:52.779438    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1029 08:20:52.786818    8556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1029 08:20:52.786859    8556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1029 08:20:52.788466    8556 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1029 08:20:52.788508    8556 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1029 08:20:52.790059    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:20:52.796509    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:20:52.796538    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1029 08:20:52.796553    8556 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1029 08:20:52.802963    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 08:20:52.802976    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:20:52.805653    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1029 08:20:52.805679    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1029 08:20:52.805919    8556 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:52.805942    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1029 08:20:52.821327    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1029 08:20:52.821362    8556 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1029 08:20:52.827362    8556 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:20:52.827384    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1029 08:20:52.836135    8556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1029 08:20:52.836236    8556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1029 08:20:52.842197    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1029 08:20:52.842956    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1029 08:20:52.842973    8556 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1029 08:20:52.856389    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1029 08:20:52.856413    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1029 08:20:52.868061    8556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1029 08:20:52.868141    8556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1029 08:20:52.868962    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:52.878437    8556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:20:52.878542    8556 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1029 08:20:52.884271    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:20:52.901778    8556 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:20:52.901857    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1029 08:20:52.909381    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1029 08:20:52.909471    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1029 08:20:52.917745    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1029 08:20:52.917769    8556 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1029 08:20:52.941732    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:20:52.978686    8556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1029 08:20:52.979065    8556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1029 08:20:52.983380    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:20:52.991718    8556 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:20:52.991798    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1029 08:20:53.029329    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1029 08:20:53.029355    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1029 08:20:53.030833    8556 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1029 08:20:53.032729    8556 node_ready.go:35] waiting up to 6m0s for node "addons-306574" to be "Ready" ...
	I1029 08:20:53.082101    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:20:53.122179    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1029 08:20:53.122213    8556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1029 08:20:53.199344    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1029 08:20:53.199380    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1029 08:20:53.258286    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1029 08:20:53.258313    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1029 08:20:53.271229    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:20:53.281233    8556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:20:53.281321    8556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1029 08:20:53.307571    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:20:53.536712    8556 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-306574" context rescaled to 1 replicas
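
The rescale above trims the stock two-replica coredns deployment down to one for this single-node profile; as a sketch, it is equivalent to:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1
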
	I1029 08:20:53.989071    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.25275202s)
	I1029 08:20:53.989122    8556 addons.go:480] Verifying addon ingress=true in "addons-306574"
	I1029 08:20:53.989119    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.248780728s)
	I1029 08:20:53.989241    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.199158787s)
	I1029 08:20:53.989214    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.228051694s)
	I1029 08:20:53.989319    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.192782858s)
	I1029 08:20:53.989399    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.186366573s)
	I1029 08:20:53.989416    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.186425983s)
	I1029 08:20:53.989448    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.147173088s)
	I1029 08:20:53.989543    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.120556026s)
	I1029 08:20:53.989575    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.10527748s)
	W1029 08:20:53.989580    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:53.989591    8556 addons.go:480] Verifying addon registry=true in "addons-306574"
	I1029 08:20:53.989597    8556 retry.go:31] will retry after 281.431723ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
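
	The validation error is consistent with the earlier transfer of ig-crd.yaml at just 14 bytes, far too small for a CRD manifest, so kubectl finds no apiVersion or kind to validate. A quick on-node check, assuming SSH access to the profile:

    sudo wc -c /etc/kubernetes/addons/ig-crd.yaml
    sudo cat /etc/kubernetes/addons/ig-crd.yaml
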
	I1029 08:20:53.989641    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.047802688s)
	I1029 08:20:53.989663    8556 addons.go:480] Verifying addon metrics-server=true in "addons-306574"
	I1029 08:20:53.989689    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.006271075s)
	I1029 08:20:53.990643    8556 out.go:179] * Verifying ingress addon...
	I1029 08:20:53.991473    8556 out.go:179] * Verifying registry addon...
	I1029 08:20:53.991487    8556 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-306574 service yakd-dashboard -n yakd-dashboard
	
	I1029 08:20:53.994200    8556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1029 08:20:53.994209    8556 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1029 08:20:53.997159    8556 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1029 08:20:53.997254    8556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:20:53.997271    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:54.271730    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:54.413835    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.331683892s)
	W1029 08:20:54.413888    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1029 08:20:54.413897    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.142635146s)
	I1029 08:20:54.413916    8556 retry.go:31] will retry after 223.011828ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
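
Unlike the gadget failure, the VolumeSnapshotClass failure above is a transient ordering race: the snapshot CRDs and a VolumeSnapshotClass that uses them are submitted in the same apply, and the API server has not registered the new kind by the time the class arrives ("ensure CRDs are installed first"). The retries below clear it once the CRDs are established. One way to wait for that explicitly, sketched with the apiextensions client (an illustration; minikube itself simply retries):

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitEstablished blocks until the named CRD reports Established=True,
	// after which custom resources of that kind can be applied safely.
	func waitEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

From the command line, running kubectl wait --for=condition=Established on the CRD between the two applies achieves the same thing.
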
	I1029 08:20:54.414178    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.106567738s)
	I1029 08:20:54.414209    8556 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-306574"
	I1029 08:20:54.416184    8556 out.go:179] * Verifying csi-hostpath-driver addon...
	I1029 08:20:54.418159    8556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1029 08:20:54.420466    8556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:20:54.420486    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:54.521686    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:54.521711    8556 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1029 08:20:54.521727    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:54.637314    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1029 08:20:54.902678    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:54.902714    8556 retry.go:31] will retry after 232.688774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
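
retry.go schedules these reapplies with growing delays (223ms and 232ms above, climbing to roughly 12s by the end of this log). A minimal sketch of that shape with jittered exponential backoff; the constants are illustrative, not minikube's actual schedule:

	import (
		"log"
		"math/rand"
		"time"
	)

	// retryWithBackoff reruns fn until it succeeds or attempts run out,
	// roughly doubling the delay between tries.
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter keeps concurrent retriers from synchronizing.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			log.Printf("will retry after %v: %v", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}
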
	I1029 08:20:54.921585    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:54.997110    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:54.997279    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:55.036017    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
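
The node_ready.go warnings that recur below poll the node object while kubelet brings the runtime and CNI up; the check amounts to reading the node's Ready condition. A sketch with client-go (imports as in the pod-wait sketch above; the function name is mine):

	// nodeReady reports whether the named node's Ready condition is True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
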
	I1029 08:20:55.135975    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:55.422233    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:55.522618    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:55.522827    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:55.922169    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:55.998072    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:55.998131    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:56.421169    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:56.521981    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:56.522198    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:56.921479    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:56.997457    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:56.997532    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:57.151706    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.514344936s)
	I1029 08:20:57.151793    8556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.015749959s)
	W1029 08:20:57.151828    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:57.151854    8556 retry.go:31] will retry after 397.704638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1029 08:20:57.421254    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:57.521679    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:57.521931    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:57.535454    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:20:57.550710    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:57.921319    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:57.997284    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:57.997515    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:58.088058    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:58.088092    8556 retry.go:31] will retry after 632.016127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1029 08:20:58.422630    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:58.523181    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:58.523370    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:20:58.721037    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:20:58.921676    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:58.997295    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:58.997446    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:59.258509    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:20:59.258539    8556 retry.go:31] will retry after 1.52680531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1029 08:20:59.421933    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:59.523011    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:59.523018    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:20:59.535540    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:20:59.921808    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:20:59.997516    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:20:59.997546    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:00.055187    8556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1029 08:21:00.055254    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:21:00.073503    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:21:00.190419    8556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1029 08:21:00.204449    8556 addons.go:239] Setting addon gcp-auth=true in "addons-306574"
	I1029 08:21:00.204500    8556 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:21:00.204851    8556 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:21:00.223226    8556 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1029 08:21:00.223289    8556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:21:00.242686    8556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
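
The cli_runner/sshutil pairs above resolve which host port Docker mapped to the node container's SSH port (22/tcp) and then dial 127.0.0.1 on it (32768 here). The same lookup from Go via os/exec, with the format template copied verbatim from the log:

	import (
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port Docker mapped to 22/tcp of container.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`,
			container).Output()
		if err != nil {
			return "", err
		}
		// The output is wrapped in the literal quotes from the -f string.
		return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
	}
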
	I1029 08:21:00.342194    8556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:21:00.343553    8556 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1029 08:21:00.344668    8556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1029 08:21:00.344688    8556 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1029 08:21:00.358655    8556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1029 08:21:00.358680    8556 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1029 08:21:00.371732    8556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:00.371752    8556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
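
"scp memory --> <path> (N bytes)" means an in-memory buffer is written straight to a file on the node over the SSH connection opened above, rather than copied from disk. A hedged sketch of the idea with golang.org/x/crypto/ssh; piping to sudo tee is a stand-in for illustration, not minikube's actual transfer mechanism:

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	// writeFileOverSSH streams data to path on the remote host.
	func writeFileOverSSH(client *ssh.Client, path string, data []byte) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run("sudo tee " + path + " >/dev/null")
	}
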
	I1029 08:21:00.384679    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:21:00.422089    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:00.497682    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:00.497843    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:00.697361    8556 addons.go:480] Verifying addon gcp-auth=true in "addons-306574"
	I1029 08:21:00.698633    8556 out.go:179] * Verifying gcp-auth addon...
	I1029 08:21:00.701222    8556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1029 08:21:00.703778    8556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1029 08:21:00.703795    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:00.786113    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:00.921778    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:00.997684    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:00.997770    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:01.204735    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:01.339832    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:01.339864    8556 retry.go:31] will retry after 2.504972298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1029 08:21:01.421573    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:01.497160    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:01.497355    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:01.535971    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:01.704853    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:01.921958    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:01.997724    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:01.997892    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:02.203943    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:02.421734    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:02.497519    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:02.497711    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:02.704829    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:02.921855    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:02.997471    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:02.997666    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:03.203934    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:03.421805    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:03.497527    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:03.497707    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:03.536257    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:03.703769    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:03.846031    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:03.921825    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:03.997642    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:03.997759    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:04.205034    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:04.385861    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:04.385895    8556 retry.go:31] will retry after 3.240460661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1029 08:21:04.421427    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:04.496978    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:04.497155    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:04.704392    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:04.921301    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:04.996802    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:04.996936    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:05.205313    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:05.421154    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:05.497816    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:05.497950    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:05.703827    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:05.922093    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:05.997645    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:05.997801    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:06.036443    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:06.204282    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:06.420928    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:06.497452    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:06.497655    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:06.704553    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:06.921242    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:06.996833    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:06.997092    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:07.204253    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:07.421408    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:07.497241    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:07.497297    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:07.627108    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:07.704395    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:07.921665    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:07.998217    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:07.998389    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:08.171982    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:08.172036    8556 retry.go:31] will retry after 5.626189077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1029 08:21:08.204932    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:08.421925    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:08.497324    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:08.497473    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:08.536098    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:08.704723    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:08.921737    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:08.997326    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:08.997483    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:09.204858    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:09.421526    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:09.497394    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:09.497456    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:09.704887    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:09.922212    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:09.997866    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:09.997981    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:10.203960    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:10.421803    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:10.497608    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:10.497633    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:10.704826    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:10.921499    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:10.997142    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:10.997327    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:11.035582    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:11.204224    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:11.420854    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:11.497449    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:11.497618    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:11.705090    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:11.922094    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:11.997788    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:11.997935    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:12.204206    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:12.421043    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:12.497877    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:12.498155    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:12.704065    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:12.921866    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:12.997568    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:12.997718    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:13.036123    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:13.205129    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:13.420719    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:13.497389    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:13.497520    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:13.704768    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:13.798966    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:13.922018    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:14.000317    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:14.000518    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:14.204074    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:14.341164    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:14.341196    8556 retry.go:31] will retry after 9.005876741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1029 08:21:14.420499    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:14.496889    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:14.497083    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:14.703823    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:14.921524    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:14.997296    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:14.997475    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:15.204113    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:15.421184    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:15.497820    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:15.498019    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:15.535368    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:15.703868    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:15.921683    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:15.997342    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:15.997490    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:16.204844    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:16.421529    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:16.497121    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:16.497294    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:16.704081    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:16.921722    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:16.997160    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:16.997277    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:17.204339    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:17.421211    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:17.497648    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:17.497920    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:17.536348    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:17.705222    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:17.921135    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:17.997830    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:17.997905    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:18.204151    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:18.420912    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:18.497418    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:18.497550    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:18.704676    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:18.921835    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:18.997450    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:18.997659    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:19.204705    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:19.421316    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:19.496761    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:19.496875    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:19.704341    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:19.921290    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:19.997081    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:19.997099    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:20.035387    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:20.204118    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:20.422031    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:20.497698    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:20.497808    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:20.704100    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:20.922048    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:20.997827    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:20.997906    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:21.204697    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:21.421544    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:21.497109    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:21.497298    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:21.704401    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:21.921234    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:21.998101    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:21.998259    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:22.035871    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:22.204647    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:22.421404    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:22.497634    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:22.497702    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:22.704152    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:22.920890    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:22.997612    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:22.997847    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:23.204737    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:23.347976    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:23.421312    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:23.497156    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:23.497211    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:23.704401    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1029 08:21:23.888864    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:23.888894    8556 retry.go:31] will retry after 11.978787272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
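
	The failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: at least one YAML document in that file reaches the apply step without its apiVersion: and kind: header (an empty document between --- separators produces the same "[apiVersion not set, kind not set]" message), so the command exits 1 even though every other object is reported unchanged. minikube's addon applier reacts by re-running the whole apply after a randomized delay; the identical error recurs below with waits of 8.49s and 14.55s. A minimal Go sketch of that retry shape, assuming a doubling backoff with jitter (the actual policy in retry.go may differ):

	    // Sketch of the retry-with-randomized-backoff pattern behind the
	    // "retry.go:31] will retry after ..." lines. Illustration only, not
	    // minikube's implementation; the backoff bounds are assumptions.
	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "os/exec"
	        "time"
	    )

	    // applyAddonManifests shells out the same way the log shows:
	    // kubectl apply --force -f <crd> -f <deployment>.
	    func applyAddonManifests(kubectl string, manifests ...string) error {
	        args := []string{"apply", "--force"}
	        for _, m := range manifests {
	            args = append(args, "-f", m)
	        }
	        out, err := exec.Command(kubectl, args...).CombinedOutput()
	        if err != nil {
	            return fmt.Errorf("%w: %s", err, out)
	        }
	        return nil
	    }

	    func main() {
	        backoff := 5 * time.Second
	        for attempt := 1; attempt <= 5; attempt++ {
	            err := applyAddonManifests("kubectl",
	                "/etc/kubernetes/addons/ig-crd.yaml",
	                "/etc/kubernetes/addons/ig-deployment.yaml")
	            if err == nil {
	                return
	            }
	            // Randomize the delay so parallel appliers do not retry in
	            // lockstep against the same apiserver.
	            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
	            fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, sleep, err)
	            time.Sleep(sleep)
	            backoff *= 2
	        }
	    }
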
	I1029 08:21:23.921452    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:23.997413    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:23.997567    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:24.036153    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:24.205126    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:24.421936    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:24.497322    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:24.497484    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:24.704656    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:24.921600    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:24.997041    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:24.997216    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:25.204715    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:25.421566    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:25.497396    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:25.497514    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:25.704749    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:25.921620    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:25.997044    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:25.997096    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:26.204254    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:26.420713    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:26.497386    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:26.497627    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:26.536125    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:26.705007    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:26.921722    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:26.997674    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:26.997700    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:27.204264    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:27.421113    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:27.497570    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:27.497789    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:27.704856    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:27.921957    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:27.997710    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:27.997858    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:28.204300    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:28.420907    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:28.497646    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:28.497696    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:28.536583    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:28.704399    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:28.921139    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:28.998080    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:28.998084    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:29.204186    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:29.420860    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:29.497659    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:29.497707    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:29.704732    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:29.921793    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:29.997614    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:29.997755    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:30.204058    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:30.421926    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:30.497461    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:30.497623    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:30.704786    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:30.921708    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:30.997174    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:30.997226    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:31.035621    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:31.204458    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:31.421331    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:31.496758    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:31.497078    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:31.704192    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:31.920956    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:31.997697    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:31.997855    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:32.204484    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:32.421154    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:32.497045    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:32.497197    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:32.704843    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:32.921769    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:32.997424    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:32.997583    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:33.035972    8556 node_ready.go:57] node "addons-306574" has "Ready":"False" status (will retry)
	I1029 08:21:33.204675    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:33.421491    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:33.497225    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:33.497296    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:33.537550    8556 node_ready.go:49] node "addons-306574" is "Ready"
	I1029 08:21:33.537586    8556 node_ready.go:38] duration metric: took 40.504833878s for node "addons-306574" to be "Ready" ...
	I1029 08:21:33.537607    8556 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:21:33.537665    8556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:21:33.559200    8556 api_server.go:72] duration metric: took 41.19072057s to wait for apiserver process to appear ...
	I1029 08:21:33.559238    8556 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:21:33.559265    8556 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1029 08:21:33.564640    8556 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1029 08:21:33.565566    8556 api_server.go:141] control plane version: v1.34.1
	I1029 08:21:33.565599    8556 api_server.go:131] duration metric: took 6.35296ms to wait for apiserver health ...
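
	Control-plane readiness is confirmed above in two steps: pgrep shows a kube-apiserver process exists, then the /healthz endpoint is polled until it answers 200 with an "ok" body. A minimal sketch of such a probe, with the assumption that skipping TLS verification is acceptable for illustration (the real client trusts the cluster CA instead):

	    // Poll an apiserver /healthz endpoint until it returns 200, as the
	    // "Checking apiserver healthz at ..." lines do. InsecureSkipVerify is
	    // a simplification for the sketch, not what minikube ships.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Printf("%s returned 200: %s\n", url, body)
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver not healthy after %s", timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
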
	I1029 08:21:33.565610    8556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:21:33.573845    8556 system_pods.go:59] 20 kube-system pods found
	I1029 08:21:33.573897    8556 system_pods.go:61] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending
	I1029 08:21:33.573906    8556 system_pods.go:61] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending
	I1029 08:21:33.573911    8556 system_pods.go:61] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending
	I1029 08:21:33.573916    8556 system_pods.go:61] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending
	I1029 08:21:33.573921    8556 system_pods.go:61] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending
	I1029 08:21:33.573926    8556 system_pods.go:61] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:33.573931    8556 system_pods.go:61] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:33.573936    8556 system_pods.go:61] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:33.573941    8556 system_pods.go:61] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:33.573953    8556 system_pods.go:61] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:33.573958    8556 system_pods.go:61] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:33.573965    8556 system_pods.go:61] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:33.573969    8556 system_pods.go:61] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending
	I1029 08:21:33.573973    8556 system_pods.go:61] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending
	I1029 08:21:33.573981    8556 system_pods.go:61] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:33.573986    8556 system_pods.go:61] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending
	I1029 08:21:33.574009    8556 system_pods.go:61] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending
	I1029 08:21:33.574017    8556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending
	I1029 08:21:33.574025    8556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending
	I1029 08:21:33.574033    8556 system_pods.go:61] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:33.574043    8556 system_pods.go:74] duration metric: took 8.423698ms to wait for pod list to return data ...
	I1029 08:21:33.574055    8556 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:21:33.576533    8556 default_sa.go:45] found service account: "default"
	I1029 08:21:33.576561    8556 default_sa.go:55] duration metric: took 2.498343ms for default service account to be created ...
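
	The default service-account wait is a single GET against the core API; the account's appearance signals that the namespace controllers are functional. A hedged client-go sketch of the same check (kubeconfig discovery is simplified to the default ~/.kube/config path):

	    // Fetch the "default" ServiceAccount in the default namespace, the
	    // same object default_sa.go waits on. Error handling is minimal.
	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        sa, err := cs.CoreV1().ServiceAccounts("default").Get(
	            context.Background(), "default", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("found service account:", sa.Name)
	    }
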
	I1029 08:21:33.576573    8556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:21:33.582822    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:33.582856    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending
	I1029 08:21:33.582867    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:33.582874    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending
	I1029 08:21:33.582881    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending
	I1029 08:21:33.582886    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending
	I1029 08:21:33.582890    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:33.582897    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:33.582903    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:33.582909    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:33.582923    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:33.582932    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:33.582940    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:33.582951    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending
	I1029 08:21:33.582960    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending
	I1029 08:21:33.582968    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:33.582977    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending
	I1029 08:21:33.582984    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending
	I1029 08:21:33.583003    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending
	I1029 08:21:33.583010    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending
	I1029 08:21:33.583018    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:33.583036    8556 retry.go:31] will retry after 262.672298ms: missing components: kube-dns
	I1029 08:21:33.704875    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:33.852060    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:33.852112    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:33.852123    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:33.852133    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:33.852144    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:33.852227    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:33.852261    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:33.852272    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:33.852277    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:33.852283    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:33.852324    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:33.852352    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:33.852359    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:33.852392    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:33.852403    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:33.852415    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:33.852427    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:33.852435    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:33.852444    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:33.852484    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:33.852500    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:33.852528    8556 retry.go:31] will retry after 258.882234ms: missing components: kube-dns
	I1029 08:21:33.950827    8556 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:21:33.950854    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:34.051215    8556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:21:34.051239    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.051279    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
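
	The kapi.go:86/:96 pairs are the addon wait loop: it first resolves a label selector to a pod set (3 pods for csi-hostpath-driver and 2 for registry above), then re-reports the aggregate state roughly every 500ms until every match is Running. A hedged client-go sketch of that loop; the selector and interval come from the log, the rest is simplified:

	    // Wait for all pods matching a label selector to reach phase Running,
	    // printing progress lines like kapi.go does. Not minikube's code.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func waitForSelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	        for {
	            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	            if err != nil {
	                return err
	            }
	            running := 0
	            for _, p := range pods.Items {
	                if p.Status.Phase == corev1.PodRunning {
	                    running++
	                }
	            }
	            if len(pods.Items) > 0 && running == len(pods.Items) {
	                return nil
	            }
	            fmt.Printf("waiting for pod %q: %d/%d running\n", selector, running, len(pods.Items))
	            select {
	            case <-ctx.Done():
	                return ctx.Err()
	            case <-time.After(500 * time.Millisecond): // cadence visible in the log
	            }
	        }
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        if err := waitForSelector(context.Background(), cs, "kube-system",
	            "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
	            panic(err)
	        }
	    }
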
	I1029 08:21:34.116041    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:34.116073    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:34.116080    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:34.116088    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:34.116100    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:34.116105    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:34.116110    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:34.116115    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:34.116118    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:34.116122    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:34.116127    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:34.116130    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:34.116134    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:34.116138    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:34.116144    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:34.116150    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:34.116155    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:34.116159    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:34.116167    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.116172    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.116177    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:34.116193    8556 retry.go:31] will retry after 486.917132ms: missing components: kube-dns
	I1029 08:21:34.204214    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:34.421812    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:34.498862    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:34.499813    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.608079    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:34.608118    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:34.608129    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:21:34.608141    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:34.608149    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:34.608161    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:34.608171    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:34.608181    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:34.608190    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:34.608196    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:34.608207    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:34.608215    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:34.608221    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:34.608231    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:34.608242    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:34.608254    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:34.608266    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:34.608278    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:34.608291    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.608308    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:34.608318    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:21:34.608338    8556 retry.go:31] will retry after 435.141221ms: missing components: kube-dns
	I1029 08:21:34.704069    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:34.922521    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:34.997928    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:34.998092    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.051415    8556 system_pods.go:86] 20 kube-system pods found
	I1029 08:21:35.051470    8556 system_pods.go:89] "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:21:35.051481    8556 system_pods.go:89] "coredns-66bc5c9577-9jrct" [43803c1e-fe5e-43b9-9e0f-df62b0764904] Running
	I1029 08:21:35.051494    8556 system_pods.go:89] "csi-hostpath-attacher-0" [80b28978-6d9b-44b3-ae61-e6d05d1fae29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1029 08:21:35.051503    8556 system_pods.go:89] "csi-hostpath-resizer-0" [e250b563-2767-4aa8-8de1-4cf2211c0238] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1029 08:21:35.051513    8556 system_pods.go:89] "csi-hostpathplugin-jqbm2" [2f4f21f6-82ff-454b-9636-d9b80db3d007] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1029 08:21:35.051521    8556 system_pods.go:89] "etcd-addons-306574" [2d22f0e7-4a8a-4b1f-bcfe-37a56ffaf97b] Running
	I1029 08:21:35.051528    8556 system_pods.go:89] "kindnet-nsf4w" [3b3cff7c-560b-4e94-befb-6d1a2d7ded72] Running
	I1029 08:21:35.051534    8556 system_pods.go:89] "kube-apiserver-addons-306574" [8644b6d0-b927-49dd-be65-b2a2282e5849] Running
	I1029 08:21:35.051541    8556 system_pods.go:89] "kube-controller-manager-addons-306574" [ff89d419-2134-4784-9737-e1bec24c6c08] Running
	I1029 08:21:35.051550    8556 system_pods.go:89] "kube-ingress-dns-minikube" [dc5542d8-6a31-4125-b723-12c2c3526b2d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:21:35.051556    8556 system_pods.go:89] "kube-proxy-6gp9v" [cd5d46dd-530d-4538-8525-bd6a713f5446] Running
	I1029 08:21:35.051562    8556 system_pods.go:89] "kube-scheduler-addons-306574" [0542d0b2-295a-4228-b4c3-18abd5038bb8] Running
	I1029 08:21:35.051570    8556 system_pods.go:89] "metrics-server-85b7d694d7-nsm7j" [d5e58e21-27a8-443a-87dd-b092fa4d1169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:21:35.051579    8556 system_pods.go:89] "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:21:35.051589    8556 system_pods.go:89] "registry-6b586f9694-782gg" [d6b59cbc-13f3-4137-ada6-66822061f960] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:21:35.051597    8556 system_pods.go:89] "registry-creds-764b6fb674-s8s7q" [1041cb13-458e-46e5-8f69-a740c85ba5df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:21:35.051606    8556 system_pods.go:89] "registry-proxy-b9mf9" [73f9106c-8bd1-4a4c-9389-08df4ebf334e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:21:35.051616    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2phmj" [6760de76-3472-48ff-a420-e4e8e8f1036d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:35.051633    8556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v4lqk" [916ac34d-f7fd-4dba-b606-7b2908081c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:21:35.051640    8556 system_pods.go:89] "storage-provisioner" [46b80bc7-2bf0-4a9e-a163-fadabac69f7b] Running
	I1029 08:21:35.051658    8556 system_pods.go:126] duration metric: took 1.475077811s to wait for k8s-apps to be running ...
	I1029 08:21:35.051670    8556 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:21:35.051724    8556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:21:35.071246    8556 system_svc.go:56] duration metric: took 19.56463ms WaitForService to wait for kubelet
	I1029 08:21:35.071282    8556 kubeadm.go:587] duration metric: took 42.702809602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:21:35.071307    8556 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:21:35.074954    8556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 08:21:35.075015    8556 node_conditions.go:123] node cpu capacity is 8
	I1029 08:21:35.075034    8556 node_conditions.go:105] duration metric: took 3.721417ms to run NodePressure ...
	I1029 08:21:35.075051    8556 start.go:242] waiting for startup goroutines ...
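
	Before the startup goroutines are released, two final gates run above: a kubelet liveness check and the NodePressure verification, which reads the node's CPU and ephemeral-storage capacity. The kubelet check relies only on systemctl's exit status; a sketch of the same test, run directly on the node rather than through the SSH runner the log uses:

	    // `systemctl is-active --quiet kubelet` exits 0 only when the unit is
	    // active, so the exit code alone answers the question.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func kubeletActive() bool {
	        // exec.Run returns a non-nil error for any non-zero exit code.
	        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	    }

	    func main() {
	        fmt.Println("kubelet active:", kubeletActive())
	    }
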
	I1029 08:21:35.205097    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:35.422632    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.497591    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.497762    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:35.704532    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:35.868698    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:35.922226    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:35.998217    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:35.998328    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:36.204744    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:36.422784    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.522869    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.523005    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:36.524730    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:36.524759    8556 retry.go:31] will retry after 8.489429665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:36.704707    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:36.921984    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:36.997917    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:36.998153    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.207683    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:37.423270    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:37.499173    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:37.499415    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.704675    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:37.922489    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:37.997974    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:37.997985    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.205567    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:38.422120    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.498167    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.498187    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:38.705132    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:38.921768    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:38.997742    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:38.997961    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:39.204341    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.422391    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:39.498054    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.498324    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:39.705090    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:39.921674    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:39.998137    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:39.998408    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.205620    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.421874    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.497546    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.497700    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:40.705116    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:40.922802    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:40.997746    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:40.997783    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.274085    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.422417    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:41.523108    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.523112    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:41.705051    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:41.922461    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:41.998292    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:41.998379    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.205525    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.422141    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:42.497926    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:42.497968    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:42.705193    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:42.922410    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.023042    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.023269    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.204557    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:43.421462    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.497360    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:43.497382    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.704408    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:43.922450    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:43.998097    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:43.998932    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.205161    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.421478    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:44.497350    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.497585    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:44.704664    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:44.922472    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:44.997205    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:44.997242    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:45.014319    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:21:45.204919    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.422041    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.497350    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.497389    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:21:45.689892    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:21:45.689924    8556 retry.go:31] will retry after 14.552494066s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
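	[editor's note] The apply fails client-side validation on ig-crd.yaml, and minikube's retry.go schedules another attempt with a longer wait each time (14.5s here, 34.1s on the next failure below). A minimal sketch of that retry-with-growing-backoff pattern, assuming a kubectl binary on PATH; the applyWithRetry helper, delays, and attempt count are illustrative, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry reruns the command with a growing, jittered delay,
	// echoing the log's "will retry after 14.5s ... 34.1s" progression.
	func applyWithRetry(args []string, attempts int) error {
		delay := 10 * time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", args...).Run(); err == nil {
				return nil
			}
			// Each wait roughly doubles, with some random jitter added.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = applyWithRetry([]string{"apply", "--force", "-f", "ig-crd.yaml"}, 3)
	}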
	I1029 08:21:45.704631    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:45.922358    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:45.998499    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:45.998525    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.204733    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:46.422178    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.523523    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.523660    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:46.704327    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:46.921185    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:46.998058    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:46.998093    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.204429    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.422264    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:47.498356    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:47.498382    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.704070    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:47.922823    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:47.998286    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:47.998375    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.204633    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.422108    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.497868    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.497908    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:48.704942    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:48.921957    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:48.997958    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:48.998366    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.205113    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.422772    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:49.497841    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.497898    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:49.705077    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:49.921456    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:49.997528    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:49.997614    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.205240    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.421403    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.498176    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.498324    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:50.704627    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:50.922441    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:50.998127    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:50.998193    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.204466    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.421869    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:51.497617    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:51.497766    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:51.704964    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:51.921681    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.024233    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.024424    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.205176    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.421667    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.497919    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:52.497980    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.704733    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:52.922155    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:52.998148    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:52.998382    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.205448    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.421924    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.497725    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.497775    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:53.704747    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:53.921897    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:53.998317    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:53.998342    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.204181    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.421218    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:54.498431    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.498477    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:54.703938    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:54.922680    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:54.997515    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:54.997601    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.204878    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.424417    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:55.497659    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.497705    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:55.704198    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:55.978599    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:55.997558    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:55.997629    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.214448    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.422187    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:56.497825    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.498018    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:56.704913    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:56.921475    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:56.997253    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:56.997272    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.203661    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.422284    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.498160    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.498228    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:57.704136    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:57.921421    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:57.997870    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:57.998011    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.204785    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.422302    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:58.497707    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:21:58.497797    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:58.704730    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:58.921940    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:58.998513    8556 kapi.go:107] duration metric: took 1m5.004309642s to wait for kubernetes.io/minikube-addons=registry ...
	I1029 08:21:58.998697    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.204752    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.422048    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.497708    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:21:59.704126    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:21:59.921462    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:21:59.997965    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.204846    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.242931    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:00.423687    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:00.500117    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:00.704157    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:00.923402    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.024077    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:22:01.148012    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:01.148043    8556 retry.go:31] will retry after 34.092481654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:01.204934    8556 kapi.go:107] duration metric: took 1m0.503711628s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1029 08:22:01.207092    8556 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-306574 cluster.
	I1029 08:22:01.208217    8556 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1029 08:22:01.209247    8556 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
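	[editor's note] The three gcp-auth messages describe opt-out behavior: every new pod in the cluster gets the credential mount unless it carries the gcp-auth-skip-secret label. A sketch of that opt-out as pod metadata; only the label key comes from the log above, while the label value, pod name, and container spec are illustrative:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-creds",
				// Opt-out key named in the log; the value is illustrative.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
		out, _ := yaml.Marshal(pod)
		fmt.Print(string(out))
	}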
	I1029 08:22:01.421985    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.497743    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:01.922213    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:01.998138    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.421491    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.497703    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:02.921899    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:02.997430    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.421827    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:03.497760    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:03.922305    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.000661    8556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:04.421732    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:04.497633    8556 kapi.go:107] duration metric: took 1m10.503421639s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1029 08:22:04.922252    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:05.421366    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:05.921709    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.422085    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:06.921434    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:07.421497    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:07.922304    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.422337    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:08.921639    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.422280    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:09.921466    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.421676    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:10.922421    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.421542    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:11.922366    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.422498    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:12.922094    8556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:13.421408    8556 kapi.go:107] duration metric: took 1m19.003249808s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1029 08:22:35.241334    8556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1029 08:22:35.782789    8556 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 08:22:35.782881    8556 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
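	[editor's note] After the last retry the inspektor-gadget addon is reported as failed for the same reason each time: kubectl's client-side validation requires every manifest document to set apiVersion and kind, and the error suggests ig-crd.yaml serves a document missing both (the companion ig-deployment.yaml resources all apply as "unchanged"). A minimal sketch of that check, assuming sigs.k8s.io/yaml; the validateTypeMeta helper is hypothetical, not kubectl's code:

	package main

	import (
		"fmt"

		"sigs.k8s.io/yaml"
	)

	type typeMeta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}

	// validateTypeMeta reproduces the class of error kubectl printed:
	// "error validating data: [apiVersion not set, kind not set]".
	func validateTypeMeta(doc []byte) error {
		var tm typeMeta
		if err := yaml.Unmarshal(doc, &tm); err != nil {
			return err
		}
		var missing []string
		if tm.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if tm.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			return fmt.Errorf("error validating data: %v", missing)
		}
		return nil
	}

	func main() {
		// A document with neither field trips both checks, as in the report.
		fmt.Println(validateTypeMeta([]byte("metadata:\n  name: x\n")))
	}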
	I1029 08:22:35.784629    8556 out.go:179] * Enabled addons: storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, registry-creds, cloud-spanner, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1029 08:22:35.785925    8556 addons.go:515] duration metric: took 1m43.417207717s for enable addons: enabled=[storage-provisioner nvidia-device-plugin amd-gpu-device-plugin ingress-dns registry-creds cloud-spanner metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1029 08:22:35.785964    8556 start.go:247] waiting for cluster config update ...
	I1029 08:22:35.785984    8556 start.go:256] writing updated cluster config ...
	I1029 08:22:35.786258    8556 ssh_runner.go:195] Run: rm -f paused
	I1029 08:22:35.790143    8556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:22:35.793571    8556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9jrct" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.797938    8556 pod_ready.go:94] pod "coredns-66bc5c9577-9jrct" is "Ready"
	I1029 08:22:35.797962    8556 pod_ready.go:86] duration metric: took 4.37412ms for pod "coredns-66bc5c9577-9jrct" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.799936    8556 pod_ready.go:83] waiting for pod "etcd-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.803938    8556 pod_ready.go:94] pod "etcd-addons-306574" is "Ready"
	I1029 08:22:35.803962    8556 pod_ready.go:86] duration metric: took 4.002476ms for pod "etcd-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.805834    8556 pod_ready.go:83] waiting for pod "kube-apiserver-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.809513    8556 pod_ready.go:94] pod "kube-apiserver-addons-306574" is "Ready"
	I1029 08:22:35.809536    8556 pod_ready.go:86] duration metric: took 3.677568ms for pod "kube-apiserver-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:35.811431    8556 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.194693    8556 pod_ready.go:94] pod "kube-controller-manager-addons-306574" is "Ready"
	I1029 08:22:36.194731    8556 pod_ready.go:86] duration metric: took 383.260397ms for pod "kube-controller-manager-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.393780    8556 pod_ready.go:83] waiting for pod "kube-proxy-6gp9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.793912    8556 pod_ready.go:94] pod "kube-proxy-6gp9v" is "Ready"
	I1029 08:22:36.793941    8556 pod_ready.go:86] duration metric: took 400.1364ms for pod "kube-proxy-6gp9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:36.994449    8556 pod_ready.go:83] waiting for pod "kube-scheduler-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:37.394151    8556 pod_ready.go:94] pod "kube-scheduler-addons-306574" is "Ready"
	I1029 08:22:37.394177    8556 pod_ready.go:86] duration metric: took 399.695054ms for pod "kube-scheduler-addons-306574" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:22:37.394188    8556 pod_ready.go:40] duration metric: took 1.60402213s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:22:37.438940    8556 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 08:22:37.441759    8556 out.go:179] * Done! kubectl is now configured to use "addons-306574" cluster and "default" namespace by default
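	[editor's note] The tail of the run is minikube's extra readiness gate: poll the kube-system pods matching each listed control-plane label until every match reports the Ready condition, bounded by the 4m0s budget. A minimal client-go sketch of that loop, with the selector list shortened and kubeconfig taken from the environment; this is an illustration of the pattern, not minikube's pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady checks the condition the log waits on: PodReady must be True.
	func podReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"}
		deadline := time.Now().Add(4 * time.Minute) // the "extra waiting up to 4m0s" budget
		for _, sel := range selectors {
			for time.Now().Before(deadline) {
				pods, err := client.CoreV1().Pods("kube-system").List(
					context.Background(), metav1.ListOptions{LabelSelector: sel})
				if err == nil && len(pods.Items) > 0 {
					ready := true
					for _, p := range pods.Items {
						if !podReady(p) {
							ready = false
						}
					}
					if ready {
						fmt.Printf("pods matching %q are Ready\n", sel)
						break
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
		}
	}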
	
	
	==> CRI-O <==
	Oct 29 08:22:38 addons-306574 crio[777]: time="2025-10-29T08:22:38.30806416Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.019894969Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=150a30ec-7a54-4109-9824-0869584a202d name=/runtime.v1.ImageService/PullImage
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.020496786Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc376a44-92d8-4516-952a-3e4d29d62cf8 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.021827997Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=951730fa-2be1-4d47-b643-35281289d842 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.025416494Z" level=info msg="Creating container: default/busybox/busybox" id=d22a2c9c-4d4f-4149-8064-dcf929d71ce0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.025529181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.031073006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.03150265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.062208806Z" level=info msg="Created container 80233b40a107019d3f029b8e37ae9ce90bb30ba13a00ab6b72a15f32bcc77d95: default/busybox/busybox" id=d22a2c9c-4d4f-4149-8064-dcf929d71ce0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.06289468Z" level=info msg="Starting container: 80233b40a107019d3f029b8e37ae9ce90bb30ba13a00ab6b72a15f32bcc77d95" id=55c5b87e-09d5-4d1f-a601-b68ca6ae1a9c name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 08:22:39 addons-306574 crio[777]: time="2025-10-29T08:22:39.064939998Z" level=info msg="Started container" PID=6571 containerID=80233b40a107019d3f029b8e37ae9ce90bb30ba13a00ab6b72a15f32bcc77d95 description=default/busybox/busybox id=55c5b87e-09d5-4d1f-a601-b68ca6ae1a9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=952c556445d286250a534a4ad9ed0f9b944d42c74a9630e01e09c8368a5075fa
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.693664676Z" level=info msg="Removing container: 3a26bb00478fd4172b3cfe13a0bfc1c630337759537f3ee89d6d6600a1c10c6d" id=f20f53b3-d64a-4597-b96b-7a6008104c45 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.700864133Z" level=info msg="Removed container 3a26bb00478fd4172b3cfe13a0bfc1c630337759537f3ee89d6d6600a1c10c6d: gcp-auth/gcp-auth-certs-patch-m8865/patch" id=f20f53b3-d64a-4597-b96b-7a6008104c45 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.702518695Z" level=info msg="Removing container: 114e42177000dbc812661113a58a57052fa9e0b263ec63d6b5dbc8ee90dd9703" id=e8360af9-339b-477d-911a-49792d8c9dbb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.710063546Z" level=info msg="Removed container 114e42177000dbc812661113a58a57052fa9e0b263ec63d6b5dbc8ee90dd9703: gcp-auth/gcp-auth-certs-create-zfc9f/create" id=e8360af9-339b-477d-911a-49792d8c9dbb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.713339286Z" level=info msg="Stopping pod sandbox: 2ff2b6526ea036398b5442b205f4df8231db5bd43564c90187009fde630254e8" id=7888a728-416f-4e8c-b6a2-6cc8d5395998 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.713398064Z" level=info msg="Stopped pod sandbox (already stopped): 2ff2b6526ea036398b5442b205f4df8231db5bd43564c90187009fde630254e8" id=7888a728-416f-4e8c-b6a2-6cc8d5395998 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.713808786Z" level=info msg="Removing pod sandbox: 2ff2b6526ea036398b5442b205f4df8231db5bd43564c90187009fde630254e8" id=3f794254-161e-471e-9219-66826ed22a02 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.717353365Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.717436242Z" level=info msg="Removed pod sandbox: 2ff2b6526ea036398b5442b205f4df8231db5bd43564c90187009fde630254e8" id=3f794254-161e-471e-9219-66826ed22a02 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.717940776Z" level=info msg="Stopping pod sandbox: fc9f0be5810c3fcb70bbcf895f97623695e3b8043ea98ba1462571d0d3d5da26" id=29806f24-5097-4b65-9c02-c44c16eb2f03 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.717980441Z" level=info msg="Stopped pod sandbox (already stopped): fc9f0be5810c3fcb70bbcf895f97623695e3b8043ea98ba1462571d0d3d5da26" id=29806f24-5097-4b65-9c02-c44c16eb2f03 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.718330998Z" level=info msg="Removing pod sandbox: fc9f0be5810c3fcb70bbcf895f97623695e3b8043ea98ba1462571d0d3d5da26" id=6a0f49c5-9369-42b7-805b-bf96942d5b7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.721670244Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 08:22:46 addons-306574 crio[777]: time="2025-10-29T08:22:46.721747292Z" level=info msg="Removed pod sandbox: fc9f0be5810c3fcb70bbcf895f97623695e3b8043ea98ba1462571d0d3d5da26" id=6a0f49c5-9369-42b7-805b-bf96942d5b7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	80233b40a1070       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   952c556445d28       busybox                                     default
	d99432f91672d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          34 seconds ago       Running             csi-snapshotter                          0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	9b8691b1023f8       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          35 seconds ago       Running             csi-provisioner                          0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	e573e53bb23e0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            36 seconds ago       Running             liveness-probe                           0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	e8d80d0af78a6       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           36 seconds ago       Running             hostpath                                 0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	2db472537b2f6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            37 seconds ago       Running             gadget                                   0                   1495b3f498d32       gadget-k5cgq                                gadget
	2e36d72e127f6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                42 seconds ago       Running             node-driver-registrar                    0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	5e8c5db172939       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             42 seconds ago       Running             controller                               0                   c65f2d2f92b6f       ingress-nginx-controller-675c5ddd98-slzzq   ingress-nginx
	baf6f3b37987b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 46 seconds ago       Running             gcp-auth                                 0                   e037f9417ff82       gcp-auth-78565c9fb4-psjtf                   gcp-auth
	0c6b816415341       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              48 seconds ago       Running             registry-proxy                           0                   e9d65f493d6ea       registry-proxy-b9mf9                        kube-system
	65705ac3c758b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     49 seconds ago       Running             nvidia-device-plugin-ctr                 0                   6e264716c8898       nvidia-device-plugin-daemonset-fm5xc        kube-system
	1fe5195ca5ae1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   52 seconds ago       Running             csi-external-health-monitor-controller   0                   0371783fc6b8f       csi-hostpathplugin-jqbm2                    kube-system
	5371c6256fe4e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     53 seconds ago       Running             amd-gpu-device-plugin                    0                   c0c74f57ce43c       amd-gpu-device-plugin-f4ngl                 kube-system
	9b0733b1c46f1       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        54 seconds ago       Running             metrics-server                           0                   6198fa49cd369       metrics-server-85b7d694d7-nsm7j             kube-system
	197632c3e4940       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              55 seconds ago       Running             csi-resizer                              0                   b65dcde210b42       csi-hostpath-resizer-0                      kube-system
	59926386ecab3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             57 seconds ago       Running             csi-attacher                             0                   386f1d0a8770b       csi-hostpath-attacher-0                     kube-system
	570224acd5072       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      58 seconds ago       Running             volume-snapshot-controller               0                   6ae14b02c8958       snapshot-controller-7d9fbc56b8-v4lqk        kube-system
	6d99cf55a4a67       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             58 seconds ago       Running             local-path-provisioner                   0                   961cc63c584a0       local-path-provisioner-648f6765c9-whpv4     local-path-storage
	217f45f262a57       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      59 seconds ago       Running             volume-snapshot-controller               0                   9b4c386fc4a26       snapshot-controller-7d9fbc56b8-2phmj        kube-system
	f20d80bdb5eda       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              patch                                    0                   abf9ae23a16bc       ingress-nginx-admission-patch-fgbht         ingress-nginx
	f05088385cca1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   6875e255270d5       ingress-nginx-admission-create-5tdvz        ingress-nginx
	1e97c2959256c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   1ec8189deaa7b       yakd-dashboard-5ff678cb9-njrr5              yakd-dashboard
	6a10f82f1439a       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   5379380c31f12       registry-6b586f9694-782gg                   kube-system
	ff1e52067a5c8       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   76e8dd4438712       kube-ingress-dns-minikube                   kube-system
	9514f177e9812       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   6b4e7daf9b454       cloud-spanner-emulator-86bd5cbb97-wrt96     default
	ea0d3827c6799       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   23a72ec104987       coredns-66bc5c9577-9jrct                    kube-system
	11ad0d3d51574       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   1b02725b41bea       storage-provisioner                         kube-system
	2df32f1d553fc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   1c528c6626acf       kube-proxy-6gp9v                            kube-system
	a41a72a7acd69       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   132dcf3f412f8       kindnet-nsf4w                               kube-system
	90b2a91e7069c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   d560b0f9607d6       kube-controller-manager-addons-306574       kube-system
	56022b3e8de6c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   65edbdb46e570       kube-scheduler-addons-306574                kube-system
	a2eacbffa27c9       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   5b80e2af7adc2       kube-apiserver-addons-306574                kube-system
	49643bd1cddf5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   96c2ab8ce679a       etcd-addons-306574                          kube-system
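	[editor's note] The container-status table above is the CRI view of the node: the same data any client gets by calling ListContainers on the runtime's socket. A minimal sketch using the k8s.io/cri-api client over gRPC; CRI-O's default socket path is an assumption consistent with the crio[777] logs, and output formatting is trimmed to a few of the columns shown:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's conventional endpoint; adjust for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(context.Background(),
			&runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Mirrors the CONTAINER / STATE / NAME / POD ID columns above.
			fmt.Printf("%.13s  %-20s  %s  %.13s\n",
				c.Id, c.State, c.Metadata.Name, c.PodSandboxId)
		}
	}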
	
	
	==> coredns [ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db] <==
	[INFO] 10.244.0.17:48114 - 61721 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003868634s
	[INFO] 10.244.0.17:42503 - 49888 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000091322s
	[INFO] 10.244.0.17:42503 - 49573 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000147778s
	[INFO] 10.244.0.17:32902 - 1408 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000068911s
	[INFO] 10.244.0.17:32902 - 1062 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000049326s
	[INFO] 10.244.0.17:34363 - 31440 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000053634s
	[INFO] 10.244.0.17:34363 - 31679 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000080995s
	[INFO] 10.244.0.17:44540 - 23607 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000115917s
	[INFO] 10.244.0.17:44540 - 23444 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010148s
	[INFO] 10.244.0.21:57957 - 42134 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000205497s
	[INFO] 10.244.0.21:46496 - 44522 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271141s
	[INFO] 10.244.0.21:56851 - 20683 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112299s
	[INFO] 10.244.0.21:56016 - 64989 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124045s
	[INFO] 10.244.0.21:34359 - 18155 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109461s
	[INFO] 10.244.0.21:53177 - 28089 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167915s
	[INFO] 10.244.0.21:47190 - 25668 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003045453s
	[INFO] 10.244.0.21:34263 - 52328 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003109642s
	[INFO] 10.244.0.21:48014 - 33709 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005554553s
	[INFO] 10.244.0.21:43116 - 54475 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006925587s
	[INFO] 10.244.0.21:54978 - 5250 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005304996s
	[INFO] 10.244.0.21:49781 - 32271 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006143363s
	[INFO] 10.244.0.21:54756 - 59227 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004428992s
	[INFO] 10.244.0.21:45790 - 60969 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00513392s
	[INFO] 10.244.0.21:34083 - 40780 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001159231s
	[INFO] 10.244.0.21:56546 - 37991 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002058078s
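	[editor's note] The NXDOMAIN bursts above are a pod resolver walking its DNS search path (the usual ndots:5 behavior): a short name is tried with every suffix — the cluster.local domains, the GCE-provided internal domains, local — until the bare storage.googleapis.com finally answers NOERROR. A small sketch of that expansion; the suffix list is transcribed from the coredns log above, not read from a live resolv.conf:

	package main

	import (
		"context"
		"fmt"
		"net"
	)

	func main() {
		// Search suffixes in the order the coredns log shows them being tried.
		searches := []string{
			"gcp-auth.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
			"", // finally, the name as-is
		}
		name := "storage.googleapis.com"
		for _, s := range searches {
			fqdn := name
			if s != "" {
				fqdn = name + "." + s
			}
			addrs, err := net.DefaultResolver.LookupHost(context.Background(), fqdn)
			if err != nil {
				fmt.Printf("%s: %v\n", fqdn, err) // the NXDOMAIN cases
				continue
			}
			fmt.Printf("%s: %v\n", fqdn, addrs) // the NOERROR case
			break
		}
	}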
	
	
	==> describe nodes <==
	Name:               addons-306574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-306574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=addons-306574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_20_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-306574
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-306574"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:20:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-306574
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:22:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:22:18 +0000   Wed, 29 Oct 2025 08:20:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:22:18 +0000   Wed, 29 Oct 2025 08:20:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:22:18 +0000   Wed, 29 Oct 2025 08:20:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:22:18 +0000   Wed, 29 Oct 2025 08:21:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-306574
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a1650f9e-db84-4c08-b5e9-8c3a81f4f882
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-86bd5cbb97-wrt96      0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gadget                      gadget-k5cgq                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gcp-auth                    gcp-auth-78565c9fb4-psjtf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-slzzq    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         112s
	  kube-system                 amd-gpu-device-plugin-f4ngl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 coredns-66bc5c9577-9jrct                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 csi-hostpathplugin-jqbm2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 etcd-addons-306574                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-nsf4w                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-addons-306574                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-addons-306574        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-6gp9v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-addons-306574                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 metrics-server-85b7d694d7-nsm7j              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         113s
	  kube-system                 nvidia-device-plugin-daemonset-fm5xc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 registry-6b586f9694-782gg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-creds-764b6fb674-s8s7q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-proxy-b9mf9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 snapshot-controller-7d9fbc56b8-2phmj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 snapshot-controller-7d9fbc56b8-v4lqk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  local-path-storage          local-path-provisioner-648f6765c9-whpv4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-njrr5               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 113s  kube-proxy       
	  Normal  Starting                 2m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m    kubelet          Node addons-306574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m    kubelet          Node addons-306574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m    kubelet          Node addons-306574 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           115s  node-controller  Node addons-306574 event: Registered Node addons-306574 in Controller
	  Normal  NodeReady                73s   kubelet          Node addons-306574 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct29 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001814] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087023] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.404793] i8042: Warning: Keylock active
	[  +0.010405] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.536694] block sda: the capability attribute has been deprecated.
	[  +0.101648] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029373] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.989088] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd] <==
	{"level":"warn","ts":"2025-10-29T08:20:43.761466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.768051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.774023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.780084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.786340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.793035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.799593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.812183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.818882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.825720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.838520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.846281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.855030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:43.896837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:54.899505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:20:54.906316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.309877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.316913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.329726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:21:21.336174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38068","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:21:41.147494Z","caller":"traceutil/trace.go:172","msg":"trace[1669655330] linearizableReadLoop","detail":"{readStateIndex:987; appliedIndex:987; }","duration":"140.839596ms","start":"2025-10-29T08:21:41.006634Z","end":"2025-10-29T08:21:41.147474Z","steps":["trace[1669655330] 'read index received'  (duration: 140.832801ms)","trace[1669655330] 'applied index is now lower than readState.Index'  (duration: 6.009µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-29T08:21:41.272739Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.024603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:21:41.272805Z","caller":"traceutil/trace.go:172","msg":"trace[1749734003] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:964; }","duration":"266.163223ms","start":"2025-10-29T08:21:41.006626Z","end":"2025-10-29T08:21:41.272789Z","steps":["trace[1749734003] 'agreement among raft nodes before linearized reading'  (duration: 140.955453ms)","trace[1749734003] 'range keys from in-memory index tree'  (duration: 125.035233ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-29T08:21:41.272802Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.196327ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040954387074543 > lease_revoke:<id:70cc9a2f0e118c48>","response":"size:29"}
	{"level":"info","ts":"2025-10-29T08:22:02.121887Z","caller":"traceutil/trace.go:172","msg":"trace[411747514] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"118.9753ms","start":"2025-10-29T08:22:02.002890Z","end":"2025-10-29T08:22:02.121866Z","steps":["trace[411747514] 'process raft request'  (duration: 107.556469ms)","trace[411747514] 'compare'  (duration: 11.333407ms)"],"step_count":2}
	
	
	==> gcp-auth [baf6f3b37987bfaa110b830d8c52aba3e7d09991703da7372971a14f5c58efef] <==
	2025/10/29 08:22:00 GCP Auth Webhook started!
	2025/10/29 08:22:37 Ready to marshal response ...
	2025/10/29 08:22:37 Ready to write response ...
	2025/10/29 08:22:37 Ready to marshal response ...
	2025/10/29 08:22:37 Ready to write response ...
	2025/10/29 08:22:38 Ready to marshal response ...
	2025/10/29 08:22:38 Ready to write response ...
	
	
	==> kernel <==
	 08:22:47 up 5 min,  0 user,  load average: 1.44, 1.03, 0.42
	Linux addons-306574 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110] <==
	I1029 08:20:53.228826       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 08:20:53.229241       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 08:21:23.228154       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 08:21:23.229437       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 08:21:23.229468       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 08:21:23.230037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 08:21:24.729302       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 08:21:24.729332       1 metrics.go:72] Registering metrics
	I1029 08:21:24.729377       1 controller.go:711] "Syncing nftables rules"
	I1029 08:21:33.231074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:21:33.231128       1 main.go:301] handling current node
	I1029 08:21:43.227728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:21:43.227785       1 main.go:301] handling current node
	I1029 08:21:53.227092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:21:53.227126       1 main.go:301] handling current node
	I1029 08:22:03.228074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:03.228108       1 main.go:301] handling current node
	I1029 08:22:13.227080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:13.227123       1 main.go:301] handling current node
	I1029 08:22:23.227704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:23.227732       1 main.go:301] handling current node
	I1029 08:22:33.228712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:33.228745       1 main.go:301] handling current node
	I1029 08:22:43.230092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:22:43.230122       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1] <==
	W1029 08:20:54.906311       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1029 08:21:00.637621       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.2.28"}
	W1029 08:21:21.309768       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:21.316860       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:21.329651       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:21.336106       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:21:33.511678       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.511720       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:33.511726       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.511753       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:33.532970       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.533218       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:33.536109       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.2.28:443: connect: connection refused
	E1029 08:21:33.536144       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.2.28:443: connect: connection refused" logger="UnhandledError"
	W1029 08:21:52.974618       1 handler_proxy.go:99] no RequestInfo found in the context
	E1029 08:21:52.974693       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1029 08:21:52.974696       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.42.12:443: connect: connection refused" logger="UnhandledError"
	E1029 08:21:52.976315       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.42.12:443: connect: connection refused" logger="UnhandledError"
	E1029 08:21:52.982055       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.42.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.42.12:443: connect: connection refused" logger="UnhandledError"
	I1029 08:21:53.034657       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1029 08:22:45.131047       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39020: use of closed network connection
	E1029 08:22:45.281097       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39044: use of closed network connection
	
	
	==> kube-controller-manager [90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67] <==
	I1029 08:20:51.290215       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 08:20:51.290215       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 08:20:51.290316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 08:20:51.290464       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 08:20:51.290560       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 08:20:51.290576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:20:51.290798       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 08:20:51.291151       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 08:20:51.291264       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 08:20:51.293174       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 08:20:51.295349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:20:51.299527       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:20:51.302766       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 08:20:51.309051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1029 08:20:53.702046       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1029 08:21:21.304142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:21:21.304265       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1029 08:21:21.304309       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1029 08:21:21.317841       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1029 08:21:21.323946       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1029 08:21:21.405349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:21:21.424548       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 08:21:36.248091       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1029 08:21:51.411158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1029 08:21:51.433364       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326] <==
	I1029 08:20:52.753094       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:20:52.989968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:20:53.090232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:20:53.093088       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:20:53.093978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:20:53.295808       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:20:53.308793       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:20:53.469652       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:20:53.485405       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:20:53.485542       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:20:53.488047       1 config.go:200] "Starting service config controller"
	I1029 08:20:53.488126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:20:53.488184       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:20:53.488210       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:20:53.488314       1 config.go:309] "Starting node config controller"
	I1029 08:20:53.488348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:20:53.488373       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:20:53.488688       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:20:53.488758       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:20:53.589406       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 08:20:53.589472       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:20:53.593100       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3] <==
	E1029 08:20:44.314716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:20:44.314773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:20:44.314829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:20:44.314883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:20:44.315214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:20:44.315364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:20:44.315428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:20:44.315544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:20:44.315615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:20:44.315686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:20:44.315709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:20:44.315738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:20:44.315615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:20:44.315829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:20:44.315844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:20:45.174042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:20:45.217845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:20:45.240104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:20:45.282210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:20:45.282215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:20:45.394980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:20:45.482212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 08:20:45.486264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:20:45.511641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1029 08:20:45.911223       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:21:59 addons-306574 kubelet[1303]: I1029 08:21:59.969182    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-b9mf9" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:22:00 addons-306574 kubelet[1303]: I1029 08:22:00.990358    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-psjtf" podStartSLOduration=50.784957641 podStartE2EDuration="1m0.990336929s" podCreationTimestamp="2025-10-29 08:21:00 +0000 UTC" firstStartedPulling="2025-10-29 08:21:49.725740888 +0000 UTC m=+63.109358770" lastFinishedPulling="2025-10-29 08:21:59.931120165 +0000 UTC m=+73.314738058" observedRunningTime="2025-10-29 08:22:00.989163108 +0000 UTC m=+74.372781009" watchObservedRunningTime="2025-10-29 08:22:00.990336929 +0000 UTC m=+74.373954829"
	Oct 29 08:22:01 addons-306574 kubelet[1303]: I1029 08:22:01.705416    1303 scope.go:117] "RemoveContainer" containerID="ff2b1a7338dd86114cdd4cc6f64a4ab55afed1d756ef642f1d19325ad2cac76c"
	Oct 29 08:22:03 addons-306574 kubelet[1303]: I1029 08:22:03.987793    1303 scope.go:117] "RemoveContainer" containerID="ff2b1a7338dd86114cdd4cc6f64a4ab55afed1d756ef642f1d19325ad2cac76c"
	Oct 29 08:22:04 addons-306574 kubelet[1303]: I1029 08:22:04.012839    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-slzzq" podStartSLOduration=55.884060516 podStartE2EDuration="1m10.012819568s" podCreationTimestamp="2025-10-29 08:20:54 +0000 UTC" firstStartedPulling="2025-10-29 08:21:49.740378892 +0000 UTC m=+63.123996771" lastFinishedPulling="2025-10-29 08:22:03.869137942 +0000 UTC m=+77.252755823" observedRunningTime="2025-10-29 08:22:04.012588196 +0000 UTC m=+77.396206093" watchObservedRunningTime="2025-10-29 08:22:04.012819568 +0000 UTC m=+77.396437467"
	Oct 29 08:22:05 addons-306574 kubelet[1303]: I1029 08:22:05.028295    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6s6b\" (UniqueName: \"kubernetes.io/projected/b836c24e-77b1-401e-a5f7-35956d33903a-kube-api-access-n6s6b\") pod \"b836c24e-77b1-401e-a5f7-35956d33903a\" (UID: \"b836c24e-77b1-401e-a5f7-35956d33903a\") "
	Oct 29 08:22:05 addons-306574 kubelet[1303]: I1029 08:22:05.030526    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b836c24e-77b1-401e-a5f7-35956d33903a-kube-api-access-n6s6b" (OuterVolumeSpecName: "kube-api-access-n6s6b") pod "b836c24e-77b1-401e-a5f7-35956d33903a" (UID: "b836c24e-77b1-401e-a5f7-35956d33903a"). InnerVolumeSpecName "kube-api-access-n6s6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 29 08:22:05 addons-306574 kubelet[1303]: I1029 08:22:05.129219    1303 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n6s6b\" (UniqueName: \"kubernetes.io/projected/b836c24e-77b1-401e-a5f7-35956d33903a-kube-api-access-n6s6b\") on node \"addons-306574\" DevicePath \"\""
	Oct 29 08:22:05 addons-306574 kubelet[1303]: E1029 08:22:05.431733    1303 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 29 08:22:05 addons-306574 kubelet[1303]: E1029 08:22:05.431810    1303 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1041cb13-458e-46e5-8f69-a740c85ba5df-gcr-creds podName:1041cb13-458e-46e5-8f69-a740c85ba5df nodeName:}" failed. No retries permitted until 2025-10-29 08:22:37.431797652 +0000 UTC m=+110.815415542 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/1041cb13-458e-46e5-8f69-a740c85ba5df-gcr-creds") pod "registry-creds-764b6fb674-s8s7q" (UID: "1041cb13-458e-46e5-8f69-a740c85ba5df") : secret "registry-creds-gcr" not found
	Oct 29 08:22:06 addons-306574 kubelet[1303]: I1029 08:22:06.005162    1303 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9f0be5810c3fcb70bbcf895f97623695e3b8043ea98ba1462571d0d3d5da26"
	Oct 29 08:22:10 addons-306574 kubelet[1303]: I1029 08:22:10.033188    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-k5cgq" podStartSLOduration=66.154977522 podStartE2EDuration="1m17.033168487s" podCreationTimestamp="2025-10-29 08:20:53 +0000 UTC" firstStartedPulling="2025-10-29 08:21:58.220312623 +0000 UTC m=+71.603930504" lastFinishedPulling="2025-10-29 08:22:09.098503568 +0000 UTC m=+82.482121469" observedRunningTime="2025-10-29 08:22:10.032498812 +0000 UTC m=+83.416116711" watchObservedRunningTime="2025-10-29 08:22:10.033168487 +0000 UTC m=+83.416786386"
	Oct 29 08:22:10 addons-306574 kubelet[1303]: I1029 08:22:10.762683    1303 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 29 08:22:10 addons-306574 kubelet[1303]: I1029 08:22:10.762726    1303 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 29 08:22:13 addons-306574 kubelet[1303]: I1029 08:22:13.062004    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-jqbm2" podStartSLOduration=1.718257266 podStartE2EDuration="40.061966419s" podCreationTimestamp="2025-10-29 08:21:33 +0000 UTC" firstStartedPulling="2025-10-29 08:21:33.970031532 +0000 UTC m=+47.353649426" lastFinishedPulling="2025-10-29 08:22:12.313740701 +0000 UTC m=+85.697358579" observedRunningTime="2025-10-29 08:22:13.060501646 +0000 UTC m=+86.444119551" watchObservedRunningTime="2025-10-29 08:22:13.061966419 +0000 UTC m=+86.445584321"
	Oct 29 08:22:22 addons-306574 kubelet[1303]: I1029 08:22:22.707787    1303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ec7a66-92d9-4061-9d21-7b1634931610" path="/var/lib/kubelet/pods/19ec7a66-92d9-4061-9d21-7b1634931610/volumes"
	Oct 29 08:22:36 addons-306574 kubelet[1303]: I1029 08:22:36.708088    1303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b836c24e-77b1-401e-a5f7-35956d33903a" path="/var/lib/kubelet/pods/b836c24e-77b1-401e-a5f7-35956d33903a/volumes"
	Oct 29 08:22:37 addons-306574 kubelet[1303]: E1029 08:22:37.486226    1303 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 29 08:22:37 addons-306574 kubelet[1303]: E1029 08:22:37.486319    1303 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1041cb13-458e-46e5-8f69-a740c85ba5df-gcr-creds podName:1041cb13-458e-46e5-8f69-a740c85ba5df nodeName:}" failed. No retries permitted until 2025-10-29 08:23:41.48630555 +0000 UTC m=+174.869923440 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/1041cb13-458e-46e5-8f69-a740c85ba5df-gcr-creds") pod "registry-creds-764b6fb674-s8s7q" (UID: "1041cb13-458e-46e5-8f69-a740c85ba5df") : secret "registry-creds-gcr" not found
	Oct 29 08:22:38 addons-306574 kubelet[1303]: I1029 08:22:38.089979    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1e2d92de-7eba-4d7c-b287-b5b5f0ea39a2-gcp-creds\") pod \"busybox\" (UID: \"1e2d92de-7eba-4d7c-b287-b5b5f0ea39a2\") " pod="default/busybox"
	Oct 29 08:22:38 addons-306574 kubelet[1303]: I1029 08:22:38.090099    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twls6\" (UniqueName: \"kubernetes.io/projected/1e2d92de-7eba-4d7c-b287-b5b5f0ea39a2-kube-api-access-twls6\") pod \"busybox\" (UID: \"1e2d92de-7eba-4d7c-b287-b5b5f0ea39a2\") " pod="default/busybox"
	Oct 29 08:22:39 addons-306574 kubelet[1303]: I1029 08:22:39.152513    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.437251308 podStartE2EDuration="2.15249268s" podCreationTimestamp="2025-10-29 08:22:37 +0000 UTC" firstStartedPulling="2025-10-29 08:22:38.306077308 +0000 UTC m=+111.689695192" lastFinishedPulling="2025-10-29 08:22:39.021318685 +0000 UTC m=+112.404936564" observedRunningTime="2025-10-29 08:22:39.152085012 +0000 UTC m=+112.535702910" watchObservedRunningTime="2025-10-29 08:22:39.15249268 +0000 UTC m=+112.536110579"
	Oct 29 08:22:45 addons-306574 kubelet[1303]: E1029 08:22:45.130951    1303 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42390->127.0.0.1:45207: write tcp 127.0.0.1:42390->127.0.0.1:45207: write: broken pipe
	Oct 29 08:22:46 addons-306574 kubelet[1303]: I1029 08:22:46.692436    1303 scope.go:117] "RemoveContainer" containerID="3a26bb00478fd4172b3cfe13a0bfc1c630337759537f3ee89d6d6600a1c10c6d"
	Oct 29 08:22:46 addons-306574 kubelet[1303]: I1029 08:22:46.701145    1303 scope.go:117] "RemoveContainer" containerID="114e42177000dbc812661113a58a57052fa9e0b263ec63d6b5dbc8ee90dd9703"
	
	
	==> storage-provisioner [11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3] <==
	W1029 08:22:22.454542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:24.458231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:24.463583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:26.467221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:26.471232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:28.474247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:28.479141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:30.481530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:30.485116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:32.488181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:32.493394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:34.497102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:34.500986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:36.503794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:36.507608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:38.510837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:38.515083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:40.518090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:40.521926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:42.524837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:42.530172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:44.533648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:44.537906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:46.540709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:22:46.546025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-306574 -n addons-306574
helpers_test.go:269: (dbg) Run:  kubectl --context addons-306574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht registry-creds-764b6fb674-s8s7q
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-306574 describe pod ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht registry-creds-764b6fb674-s8s7q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-306574 describe pod ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht registry-creds-764b6fb674-s8s7q: exit status 1 (60.181754ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5tdvz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fgbht" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-s8s7q" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-306574 describe pod ingress-nginx-admission-create-5tdvz ingress-nginx-admission-patch-fgbht registry-creds-764b6fb674-s8s7q: exit status 1
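For context on the post-mortem above: the harness lists every pod whose status.phase is not Running, then describes each one; here all three describes fail with NotFound, presumably because the pods were deleted or replaced between the list and the describe. A minimal client-go sketch of the same field-selector query follows; this is a hypothetical illustration, not minikube's actual helper code, and the context name is only taken from this report.

// nonrunning.go: sketch of the equivalent of
//   kubectl get po -A --field-selector=status.phase!=Running
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig and select the test profile's context
	// (name copied from this report; any context works).
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "addons-306574"},
	).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Field selectors are evaluated server-side; status.phase is one of
	// the few pod fields the API server supports selecting on.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Because the listed pods can disappear before a follow-up describe, a NotFound on the second call (as seen above) is a benign race in this post-mortem flow rather than an additional failure.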
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable headlamp --alsologtostderr -v=1: exit status 11 (250.912567ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:47.958952   18286 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:47.959265   18286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:47.959276   18286 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:47.959280   18286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:47.959464   18286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:47.959730   18286 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:47.960045   18286 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:47.960058   18286 addons.go:607] checking whether the cluster is paused
	I1029 08:22:47.960139   18286 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:47.960154   18286 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:47.960532   18286 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:47.978727   18286 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:47.978790   18286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:47.997157   18286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:48.099749   18286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:48.099837   18286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:48.130217   18286 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:48.130235   18286 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:48.130239   18286 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:48.130242   18286 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:48.130245   18286 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:48.130248   18286 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:48.130250   18286 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:48.130253   18286 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:48.130255   18286 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:48.130267   18286 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:48.130272   18286 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:48.130276   18286 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:48.130280   18286 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:48.130283   18286 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:48.130287   18286 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:48.130293   18286 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:48.130302   18286 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:48.130311   18286 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:48.130313   18286 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:48.130316   18286 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:48.130319   18286 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:48.130321   18286 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:48.130323   18286 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:48.130326   18286 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:48.130328   18286 cri.go:89] found id: ""
	I1029 08:22:48.130370   18286 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:48.144682   18286 out.go:203] 
	W1029 08:22:48.146241   18286 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:48.146273   18286 out.go:285] * 
	* 
	W1029 08:22:48.149277   18286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:48.150600   18286 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.62s)
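
Every MK_ADDON_DISABLE_PAUSED failure in this run follows the pattern visible in the stderr above: before disabling anything, minikube checks whether the cluster is paused by listing kube-system containers over CRI (which succeeds) and then running "sudo runc list -f json" on the node, which exits 1 because /run/runc does not exist on this crio image. The disable step therefore aborts with exit status 11 without ever touching the addon. A minimal sketch for confirming this on the node (the first two commands are taken verbatim from the log; the ls probe, and the guess that crio keeps its state under /run/crio, are assumptions):

	minikube -p addons-306574 ssh
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # CRI listing succeeds
	sudo runc list -f json        # fails: open /run/runc: no such file or directory
	ls -d /run/runc /run/crio     # probe which runtime state directories actually exist

The same check, and the same /run/runc error, recurs verbatim in the CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, and AmdGpuDevicePlugin failures below.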

x
+
TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-wrt96" [b49a4760-821a-4f53-b19f-f934a5bad5ea] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003007227s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (260.156201ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:23:06.504358   20550 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:06.504653   20550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:06.504668   20550 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:06.504673   20550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:06.504914   20550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:23:06.505227   20550 mustload.go:66] Loading cluster: addons-306574
	I1029 08:23:06.505617   20550 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:06.505638   20550 addons.go:607] checking whether the cluster is paused
	I1029 08:23:06.505747   20550 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:06.505770   20550 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:23:06.506307   20550 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:23:06.527564   20550 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:06.527636   20550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:23:06.546980   20550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:23:06.647883   20550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:06.647948   20550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:06.677786   20550 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:23:06.677808   20550 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:23:06.677812   20550 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:23:06.677818   20550 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:23:06.677821   20550 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:23:06.677824   20550 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:23:06.677827   20550 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:23:06.677829   20550 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:23:06.677832   20550 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:23:06.677837   20550 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:23:06.677843   20550 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:23:06.677846   20550 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:23:06.677850   20550 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:23:06.677854   20550 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:23:06.677858   20550 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:23:06.677873   20550 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:23:06.677881   20550 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:23:06.677886   20550 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:23:06.677891   20550 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:23:06.677895   20550 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:23:06.677902   20550 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:23:06.677907   20550 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:23:06.677914   20550 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:23:06.677919   20550 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:23:06.677926   20550 cri.go:89] found id: ""
	I1029 08:23:06.677973   20550 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:06.692470   20550 out.go:203] 
	W1029 08:23:06.693868   20550 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:06.693888   20550 out.go:285] * 
	* 
	W1029 08:23:06.697161   20550 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:06.699030   20550 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

x
+
TestAddons/parallel/LocalPath (10.15s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-306574 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-306574 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1903312f-19eb-4e12-bf7c-18acfdfd9c46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1903312f-19eb-4e12-bf7c-18acfdfd9c46] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1903312f-19eb-4e12-bf7c-18acfdfd9c46] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003708472s
addons_test.go:967: (dbg) Run:  kubectl --context addons-306574 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 ssh "cat /opt/local-path-provisioner/pvc-58b50d07-4433-469f-9454-6e846c678332_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-306574 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-306574 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.312905ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:23:09.033957   20824 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:23:09.034125   20824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:09.034136   20824 out.go:374] Setting ErrFile to fd 2...
	I1029 08:23:09.034140   20824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:23:09.034365   20824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:23:09.034624   20824 mustload.go:66] Loading cluster: addons-306574
	I1029 08:23:09.034935   20824 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:09.034950   20824 addons.go:607] checking whether the cluster is paused
	I1029 08:23:09.035043   20824 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:23:09.035058   20824 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:23:09.035438   20824 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:23:09.053939   20824 ssh_runner.go:195] Run: systemctl --version
	I1029 08:23:09.054011   20824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:23:09.074598   20824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:23:09.177892   20824 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:23:09.177964   20824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:23:09.219874   20824 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:23:09.219900   20824 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:23:09.219905   20824 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:23:09.219920   20824 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:23:09.219925   20824 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:23:09.219930   20824 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:23:09.219935   20824 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:23:09.219939   20824 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:23:09.219943   20824 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:23:09.219958   20824 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:23:09.219965   20824 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:23:09.219969   20824 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:23:09.219973   20824 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:23:09.219977   20824 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:23:09.219980   20824 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:23:09.219986   20824 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:23:09.220866   20824 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:23:09.220880   20824 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:23:09.220884   20824 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:23:09.220888   20824 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:23:09.220901   20824 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:23:09.220905   20824 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:23:09.220909   20824 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:23:09.220912   20824 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:23:09.220916   20824 cri.go:89] found id: ""
	I1029 08:23:09.220970   20824 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:23:09.241076   20824 out.go:203] 
	W1029 08:23:09.242559   20824 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:23:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:23:09.242587   20824 out.go:285] * 
	* 
	W1029 08:23:09.247452   20824 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:23:09.249632   20824 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.15s)
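
The functional half of this test passed: the PVC bound once the consumer pod was scheduled, the busybox container wrote file1, and the test read it back from /opt/local-path-provisioner on the node. Only the trailing "addons disable" step failed, for the same /run/runc reason described under Headlamp above. The testdata manifests are not reproduced in this report; a minimal equivalent of what the test applies (image, size, and command are assumptions) would be:

	# hedged reconstruction of testdata/storage-provisioner-rancher/{pvc,pod}.yaml
	kubectl --context addons-306574 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: test-local-path
	  labels:
	    run: test-local-path
	spec:
	  restartPolicy: Never
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sh", "-c", "echo local-path > /test/file1"]
	    volumeMounts:
	    - name: data
	      mountPath: /test
	  volumes:
	  - name: data
	    persistentVolumeClaim:
	      claimName: test-pvc
	EOF

The repeated Pending polls above are expected: the local-path StorageClass binds with WaitForFirstConsumer, so the claim stays Pending until the pod is scheduled.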

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fm5xc" [06902152-4c44-414b-afca-bd97070f4a44] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004053948s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (256.478266ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:50.600062   18394 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:50.600221   18394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:50.600232   18394 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:50.600238   18394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:50.600462   18394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:50.600741   18394 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:50.601161   18394 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:50.601183   18394 addons.go:607] checking whether the cluster is paused
	I1029 08:22:50.601287   18394 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:50.601310   18394 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:50.601726   18394 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:50.620204   18394 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:50.620272   18394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:50.639642   18394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:50.739095   18394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:50.739157   18394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:50.769500   18394 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:50.769528   18394 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:50.769535   18394 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:50.769541   18394 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:50.769545   18394 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:50.769552   18394 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:50.769556   18394 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:50.769561   18394 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:50.769565   18394 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:50.769581   18394 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:50.769591   18394 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:50.769595   18394 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:50.769599   18394 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:50.769647   18394 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:50.769660   18394 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:50.769667   18394 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:50.769672   18394 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:50.769679   18394 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:50.769685   18394 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:50.769693   18394 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:50.769704   18394 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:50.769723   18394 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:50.769738   18394 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:50.769744   18394 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:50.769751   18394 cri.go:89] found id: ""
	I1029 08:22:50.769799   18394 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:50.784814   18394 out.go:203] 
	W1029 08:22:50.786310   18394 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:50.786340   18394 out.go:285] * 
	* 
	W1029 08:22:50.789534   18394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:50.790641   18394 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

x
+
TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-njrr5" [ae1d37be-a3a4-4651-a403-b91db1fca95c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003631389s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable yakd --alsologtostderr -v=1: exit status 11 (252.008639ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:53.215740   18573 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:53.216099   18573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:53.216109   18573 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:53.216113   18573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:53.216356   18573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:53.216666   18573 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:53.217019   18573 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:53.217037   18573 addons.go:607] checking whether the cluster is paused
	I1029 08:22:53.217136   18573 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:53.217158   18573 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:53.217589   18573 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:53.238106   18573 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:53.238159   18573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:53.256835   18573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:53.356093   18573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:53.356157   18573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:53.386621   18573 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:53.386645   18573 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:53.386650   18573 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:53.386654   18573 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:53.386658   18573 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:53.386662   18573 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:53.386666   18573 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:53.386669   18573 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:53.386673   18573 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:53.386682   18573 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:53.386686   18573 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:53.386690   18573 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:53.386695   18573 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:53.386699   18573 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:53.386705   18573 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:53.386737   18573 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:53.386748   18573 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:53.386751   18573 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:53.386754   18573 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:53.386756   18573 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:53.386759   18573 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:53.386762   18573 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:53.386764   18573 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:53.386766   18573 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:53.386769   18573 cri.go:89] found id: ""
	I1029 08:22:53.386809   18573 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:53.401554   18573 out.go:203] 
	W1029 08:22:53.403109   18573 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:53.403139   18573 out.go:285] * 
	* 
	W1029 08:22:53.406625   18573 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:53.408041   18573 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-f4ngl" [ffb7bad7-9c62-431a-b7cc-47e06a813d29] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003955808s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306574 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306574 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (263.349453ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1029 08:22:50.606522   18393 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:22:50.606819   18393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:50.606828   18393 out.go:374] Setting ErrFile to fd 2...
	I1029 08:22:50.606833   18393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:22:50.607005   18393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:22:50.607267   18393 mustload.go:66] Loading cluster: addons-306574
	I1029 08:22:50.607545   18393 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:50.607558   18393 addons.go:607] checking whether the cluster is paused
	I1029 08:22:50.607633   18393 config.go:182] Loaded profile config "addons-306574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:50.607646   18393 host.go:66] Checking if "addons-306574" exists ...
	I1029 08:22:50.608020   18393 cli_runner.go:164] Run: docker container inspect addons-306574 --format={{.State.Status}}
	I1029 08:22:50.627669   18393 ssh_runner.go:195] Run: systemctl --version
	I1029 08:22:50.627726   18393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306574
	I1029 08:22:50.645452   18393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/addons-306574/id_rsa Username:docker}
	I1029 08:22:50.744300   18393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:22:50.744370   18393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:22:50.775315   18393 cri.go:89] found id: "d99432f91672d8c1d47f840f261c261a51f13955e2e017761d646d32c313587e"
	I1029 08:22:50.775351   18393 cri.go:89] found id: "9b8691b1023f8ec0251c9d8bc6b0b7d0fd5fea3ab2e89770788c518fb6ca60da"
	I1029 08:22:50.775355   18393 cri.go:89] found id: "e573e53bb23e05e993a3f287fd82b620791eb65b7ea5b1a15f165ae62002efa6"
	I1029 08:22:50.775359   18393 cri.go:89] found id: "e8d80d0af78a66b1efd4b4234f27af0b707a55e687da42d583a0ccac1dbc984f"
	I1029 08:22:50.775362   18393 cri.go:89] found id: "2e36d72e127f635b90fd8546ab02d09ce6d8f7f6b4a95e36f8f801a783fecf9d"
	I1029 08:22:50.775366   18393 cri.go:89] found id: "0c6b81641534122ae540f448d067dbd84061141d3143381b755c3d7f7ca5a3ab"
	I1029 08:22:50.775368   18393 cri.go:89] found id: "65705ac3c758b666b1f55a5f8f6d8e4d57b32abe7b2d77235aee8212d4b5d20f"
	I1029 08:22:50.775371   18393 cri.go:89] found id: "1fe5195ca5ae1f93eaf51b4e0526a47a8bb3bfa8e424a07fb66a6a9f81e65eb9"
	I1029 08:22:50.775373   18393 cri.go:89] found id: "5371c6256fe4ea45e90d7c3585d7bb2374128bb029b65fdcf3fdd6c3cb545117"
	I1029 08:22:50.775385   18393 cri.go:89] found id: "9b0733b1c46f199694daba309c64792dced18ab0f2b83341c9bb8a464c908faa"
	I1029 08:22:50.775393   18393 cri.go:89] found id: "197632c3e4940a86b33c2118188a5166f3fe06ff6190898cb1f2df1fbe028d75"
	I1029 08:22:50.775396   18393 cri.go:89] found id: "59926386ecab3854c03a360c1116f1d37b0a73222f47ac804c45f9d2d657b8e4"
	I1029 08:22:50.775398   18393 cri.go:89] found id: "570224acd5072d8b3a0600cf1fcca347b6bbac8f4c411782d6c8460dce94d302"
	I1029 08:22:50.775401   18393 cri.go:89] found id: "217f45f262a57c7196f21057bc6190abc33ede848212337a47b452ce93dae999"
	I1029 08:22:50.775404   18393 cri.go:89] found id: "6a10f82f1439a50fe59f05dd964f098ffac531c82cea62036993200facf7e0e6"
	I1029 08:22:50.775415   18393 cri.go:89] found id: "ff1e52067a5c8509cce80abc4079064ad91518967401e8a972d0957857b94894"
	I1029 08:22:50.775421   18393 cri.go:89] found id: "ea0d3827c6799a404017b1447bbdc7e4f84f5ad3a21173703ca613e30d11d6db"
	I1029 08:22:50.775425   18393 cri.go:89] found id: "11ad0d3d51574f26a6e8ee3c781bf07f94f35fef399b31d7b76ecbc492516dd3"
	I1029 08:22:50.775428   18393 cri.go:89] found id: "2df32f1d553fcb4fbc3adde337ec5b7526e2a55bc93c97e9842311e3b25af326"
	I1029 08:22:50.775430   18393 cri.go:89] found id: "a41a72a7acd69b2a4429c7e311371b96f974af318dfa41734f9480cb7158d110"
	I1029 08:22:50.775432   18393 cri.go:89] found id: "90b2a91e7069cd69fc8b922a789cfdd7c81a8374454e58ca038a68bd0bb07d67"
	I1029 08:22:50.775435   18393 cri.go:89] found id: "56022b3e8de6cdcb26deeff8b32f6fd620cf216d39840756cc0e1f2c646d6ef3"
	I1029 08:22:50.775437   18393 cri.go:89] found id: "a2eacbffa27c9dda5b879ad65a85ff4f64734b5761c5ba64c5e1b95edfe7edc1"
	I1029 08:22:50.775439   18393 cri.go:89] found id: "49643bd1cddf5decfec45aac937524ad56dc098374006fcd916747c11ff71afd"
	I1029 08:22:50.775442   18393 cri.go:89] found id: ""
	I1029 08:22:50.775488   18393 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 08:22:50.789741   18393 out.go:203] 
	W1029 08:22:50.790700   18393 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:22:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 08:22:50.790721   18393 out.go:285] * 
	* 
	W1029 08:22:50.794042   18393 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 08:22:50.795267   18393 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-306574 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

x
+
TestFunctional/parallel/ServiceCmdConnect (603.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-985165 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-985165 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-qsb8z" [ec6705c8-c6ae-4666-8c1c-60b2627ef624] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-985165 -n functional-985165
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-29 08:38:28.640917188 +0000 UTC m=+1110.673136641
functional_test.go:1645: (dbg) Run:  kubectl --context functional-985165 describe po hello-node-connect-7d85dfc575-qsb8z -n default
functional_test.go:1645: (dbg) kubectl --context functional-985165 describe po hello-node-connect-7d85dfc575-qsb8z -n default:
Name:             hello-node-connect-7d85dfc575-qsb8z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-985165/192.168.49.2
Start Time:       Wed, 29 Oct 2025 08:28:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8dmn9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8dmn9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qsb8z to functional-985165
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m49s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
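
The events above give the root cause: CRI-O on this node enforces short-name mode, so the unqualified reference "kicbase/echo-server" (expanded to kicbase/echo-server:latest) fails because the configured search list offers more than one candidate registry and no alias pins it. A minimal diagnostic sketch, assuming the stock kicbase config paths and that the image also exists on Docker Hub as docker.io/kicbase/echo-server (neither assumption is confirmed by this run):

	# Reproduce the failure against the same image service the kubelet uses:
	out/minikube-linux-amd64 -p functional-985165 ssh -- sudo crictl pull kicbase/echo-server

	# Inspect the short-name policy (it may live in a drop-in under registries.conf.d):
	out/minikube-linux-amd64 -p functional-985165 ssh -- \
	  grep -rs short-name-mode /etc/containers/registries.conf /etc/containers/registries.conf.d

	# Sidestep the policy with a fully-qualified reference, which needs no short-name resolution:
	kubectl --context functional-985165 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

With a fully-qualified name there is no short-name resolution step, so the enforcing mode no longer applies.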
functional_test.go:1645: (dbg) Run:  kubectl --context functional-985165 logs hello-node-connect-7d85dfc575-qsb8z -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-985165 logs hello-node-connect-7d85dfc575-qsb8z -n default: exit status 1 (77.224337ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qsb8z" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-985165 logs hello-node-connect-7d85dfc575-qsb8z -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-985165 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-qsb8z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-985165/192.168.49.2
Start Time:       Wed, 29 Oct 2025 08:28:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8dmn9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8dmn9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qsb8z to functional-985165
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m49s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-985165 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-985165 logs -l app=hello-node-connect: exit status 1 (74.72899ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qsb8z" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-985165 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-985165 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.104.5
IPs:                      10.111.104.5
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31988/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
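
Note the empty Endpoints: field above: the service object itself is fine, but its app=hello-node-connect selector matches only the pod stuck in ImagePullBackOff, so no Ready address is ever published and NodePort 31988 has nothing to forward to. A quick confirmation with stock kubectl (sketch):

	# The endpoints object should list pod IPs; here it stays empty:
	kubectl --context functional-985165 get endpoints hello-node-connect
	# Readiness of the backing pods, one row per pod:
	kubectl --context functional-985165 get pods -l app=hello-node-connect \
	  -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready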
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-985165
helpers_test.go:243: (dbg) docker inspect functional-985165:

-- stdout --
	[
	    {
	        "Id": "63af2b1967ac26a52f50ac8b75ca3ece3ca400f5e7295a6029a35730f862f1fb",
	        "Created": "2025-10-29T08:26:28.620790156Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31585,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T08:26:28.662923956Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/63af2b1967ac26a52f50ac8b75ca3ece3ca400f5e7295a6029a35730f862f1fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/63af2b1967ac26a52f50ac8b75ca3ece3ca400f5e7295a6029a35730f862f1fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/63af2b1967ac26a52f50ac8b75ca3ece3ca400f5e7295a6029a35730f862f1fb/hosts",
	        "LogPath": "/var/lib/docker/containers/63af2b1967ac26a52f50ac8b75ca3ece3ca400f5e7295a6029a35730f862f1fb/63af2b1967ac26a52f50ac8b75ca3ece3ca400f5e7295a6029a35730f862f1fb-json.log",
	        "Name": "/functional-985165",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-985165:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-985165",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "63af2b1967ac26a52f50ac8b75ca3ece3ca400f5e7295a6029a35730f862f1fb",
	                "LowerDir": "/var/lib/docker/overlay2/d67044a42aed5ac59462723b94d8aa5ac71a5cba14866bba4bd101277d5af8e0-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d67044a42aed5ac59462723b94d8aa5ac71a5cba14866bba4bd101277d5af8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d67044a42aed5ac59462723b94d8aa5ac71a5cba14866bba4bd101277d5af8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d67044a42aed5ac59462723b94d8aa5ac71a5cba14866bba4bd101277d5af8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-985165",
	                "Source": "/var/lib/docker/volumes/functional-985165/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-985165",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-985165",
	                "name.minikube.sigs.k8s.io": "functional-985165",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37a7a09547806121e02fcc601c70e6c0ca5605cdfac319583d45d1284787b6a1",
	            "SandboxKey": "/var/run/docker/netns/37a7a0954780",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-985165": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:d9:5a:da:a5:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4085329c9db0bcd4e28f51d2e94c1e54b7d28ca1a5c33d558602d7e53a5b2946",
	                    "EndpointID": "c7e0a57174a50ce46e4e493838983257e03b3281f828c6aa9e710fb8c3f547ba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-985165",
	                        "63af2b1967ac"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
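
The inspect dump mainly documents the port plumbing: minikube publishes every container port on 127.0.0.1 only, with the apiserver's 8441/tcp mapped to host port 32781. To pull just those mappings out of a dump like this, "docker port" or a Go template over docker inspect is enough (a sketch; the template assumes each exposed port has at least one host binding, as it does here):

	docker port functional-985165
	# or, selecting the fields explicitly:
	docker inspect functional-985165 --format \
	  '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostIp}}:{{(index $bindings 0).HostPort}}{{println}}{{end}}'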
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-985165 -n functional-985165
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 logs -n 25: (1.333218281s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-985165 ssh -- ls -la /mount-9p                                                                          │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ ssh            │ functional-985165 ssh sudo umount -f /mount-9p                                                                     │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ mount          │ -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount2 --alsologtostderr -v=1 │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ mount          │ -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount3 --alsologtostderr -v=1 │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ ssh            │ functional-985165 ssh findmnt -T /mount1                                                                           │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ mount          │ -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount1 --alsologtostderr -v=1 │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ ssh            │ functional-985165 ssh findmnt -T /mount1                                                                           │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ ssh            │ functional-985165 ssh findmnt -T /mount2                                                                           │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ ssh            │ functional-985165 ssh findmnt -T /mount3                                                                           │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ mount          │ -p functional-985165 --kill=true                                                                                   │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ start          │ -p functional-985165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ start          │ -p functional-985165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ start          │ -p functional-985165 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-985165 --alsologtostderr -v=1                                                     │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ update-context │ functional-985165 update-context --alsologtostderr -v=2                                                            │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ update-context │ functional-985165 update-context --alsologtostderr -v=2                                                            │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ update-context │ functional-985165 update-context --alsologtostderr -v=2                                                            │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ image          │ functional-985165 image ls --format short --alsologtostderr                                                        │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ ssh            │ functional-985165 ssh pgrep buildkitd                                                                              │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │                     │
	│ image          │ functional-985165 image build -t localhost/my-image:functional-985165 testdata/build --alsologtostderr             │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ image          │ functional-985165 image ls                                                                                         │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ image          │ functional-985165 image ls --format yaml --alsologtostderr                                                         │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ image          │ functional-985165 image ls --format json --alsologtostderr                                                         │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ image          │ functional-985165 image ls --format table --alsologtostderr                                                        │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:28 UTC │ 29 Oct 25 08:28 UTC │
	│ service        │ functional-985165 service list                                                                                     │ functional-985165 │ jenkins │ v1.37.0 │ 29 Oct 25 08:38 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:28:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:28:35.322880   46812 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:28:35.323232   46812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:35.323245   46812 out.go:374] Setting ErrFile to fd 2...
	I1029 08:28:35.323251   46812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:35.323448   46812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:28:35.323901   46812 out.go:368] Setting JSON to false
	I1029 08:28:35.324848   46812 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":663,"bootTime":1761725852,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:28:35.324937   46812 start.go:143] virtualization: kvm guest
	I1029 08:28:35.326775   46812 out.go:179] * [functional-985165] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:28:35.328171   46812 notify.go:221] Checking for updates...
	I1029 08:28:35.328198   46812 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:28:35.329438   46812 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:28:35.330487   46812 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:28:35.331679   46812 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 08:28:35.332882   46812 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:28:35.333977   46812 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:28:35.335638   46812 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:28:35.336146   46812 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:28:35.360847   46812 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 08:28:35.360943   46812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:28:35.419738   46812 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-29 08:28:35.408587601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:28:35.419854   46812 docker.go:319] overlay module found
	I1029 08:28:35.421348   46812 out.go:179] * Using the docker driver based on existing profile
	I1029 08:28:35.422458   46812 start.go:309] selected driver: docker
	I1029 08:28:35.422470   46812 start.go:930] validating driver "docker" against &{Name:functional-985165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-985165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:28:35.422552   46812 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:28:35.422632   46812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:28:35.481749   46812 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-29 08:28:35.471493595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:28:35.482465   46812 cni.go:84] Creating CNI manager for ""
	I1029 08:28:35.482524   46812 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 08:28:35.482593   46812 start.go:353] cluster config:
	{Name:functional-985165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-985165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:28:35.484232   46812 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 29 08:28:41 functional-985165 crio[3577]: time="2025-10-29T08:28:41.303586629Z" level=info msg="Started container" PID=7609 containerID=5a9d86231cfa3e49d56c7e49b2d0f8365c1dfeb6e644878b143a26c1bc8588f8 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxnz6/kubernetes-dashboard id=abb619be-ea3f-4669-86c9-96dfd2946eda name=/runtime.v1.RuntimeService/StartContainer sandboxID=7fb5586d404c81636dc1844ad1e0d02e869267a30fa802b11880a258d989a4aa
	Oct 29 08:28:43 functional-985165 crio[3577]: time="2025-10-29T08:28:43.937318265Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4db29be3-9cc5-4dee-ab23-491195d9a739 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.924453694Z" level=info msg="Stopping pod sandbox: 3e08e8e8feb7f9879c5a62dc78325fc039bcb82fb7cef98fd0422bbeb0f65d2d" id=deced993-0e92-4fe0-9c5a-8f69b80c88a6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.924514926Z" level=info msg="Stopped pod sandbox (already stopped): 3e08e8e8feb7f9879c5a62dc78325fc039bcb82fb7cef98fd0422bbeb0f65d2d" id=deced993-0e92-4fe0-9c5a-8f69b80c88a6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.924930973Z" level=info msg="Removing pod sandbox: 3e08e8e8feb7f9879c5a62dc78325fc039bcb82fb7cef98fd0422bbeb0f65d2d" id=1fd52fa0-8b7f-4476-ae23-ee0271f5a4bf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.928041716Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.928098606Z" level=info msg="Removed pod sandbox: 3e08e8e8feb7f9879c5a62dc78325fc039bcb82fb7cef98fd0422bbeb0f65d2d" id=1fd52fa0-8b7f-4476-ae23-ee0271f5a4bf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.928528339Z" level=info msg="Stopping pod sandbox: 2ee53a27ed13d7d91d7e5bd7c04aac1d579cca72baaae625ed2610ca60d9407b" id=ce1d6b9f-3045-42e7-9e41-b763a69d71b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.928578869Z" level=info msg="Stopped pod sandbox (already stopped): 2ee53a27ed13d7d91d7e5bd7c04aac1d579cca72baaae625ed2610ca60d9407b" id=ce1d6b9f-3045-42e7-9e41-b763a69d71b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.928879199Z" level=info msg="Removing pod sandbox: 2ee53a27ed13d7d91d7e5bd7c04aac1d579cca72baaae625ed2610ca60d9407b" id=6583c7f4-bc3b-4b79-ae17-2025f5b0509b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.931094258Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.931147638Z" level=info msg="Removed pod sandbox: 2ee53a27ed13d7d91d7e5bd7c04aac1d579cca72baaae625ed2610ca60d9407b" id=6583c7f4-bc3b-4b79-ae17-2025f5b0509b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.931539492Z" level=info msg="Stopping pod sandbox: b37c98430faf629ad8732b728d8b4bef9702917aca8376f71542c76d8052f150" id=469a43bb-4899-49eb-8159-412b8d47d25d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.931596094Z" level=info msg="Stopped pod sandbox (already stopped): b37c98430faf629ad8732b728d8b4bef9702917aca8376f71542c76d8052f150" id=469a43bb-4899-49eb-8159-412b8d47d25d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.931935147Z" level=info msg="Removing pod sandbox: b37c98430faf629ad8732b728d8b4bef9702917aca8376f71542c76d8052f150" id=adbbf29b-ce90-4678-8fb1-6891d10b99bc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.934017884Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 08:28:49 functional-985165 crio[3577]: time="2025-10-29T08:28:49.934070826Z" level=info msg="Removed pod sandbox: b37c98430faf629ad8732b728d8b4bef9702917aca8376f71542c76d8052f150" id=adbbf29b-ce90-4678-8fb1-6891d10b99bc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 29 08:29:09 functional-985165 crio[3577]: time="2025-10-29T08:29:09.937391532Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=add0221e-d3a4-4619-bd39-595055661b3c name=/runtime.v1.ImageService/PullImage
	Oct 29 08:29:10 functional-985165 crio[3577]: time="2025-10-29T08:29:10.936974252Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=400ab84d-c61b-4df8-b50b-c1b1479d17dd name=/runtime.v1.ImageService/PullImage
	Oct 29 08:30:01 functional-985165 crio[3577]: time="2025-10-29T08:30:01.936560342Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=42f58b39-fe44-4b97-b2de-05eb52f8215f name=/runtime.v1.ImageService/PullImage
	Oct 29 08:30:04 functional-985165 crio[3577]: time="2025-10-29T08:30:04.93708002Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=510ffb8c-96b8-4781-8581-57382ec3757d name=/runtime.v1.ImageService/PullImage
	Oct 29 08:31:22 functional-985165 crio[3577]: time="2025-10-29T08:31:22.93716141Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d921350d-99ef-4260-9aa0-ad41744795fe name=/runtime.v1.ImageService/PullImage
	Oct 29 08:31:35 functional-985165 crio[3577]: time="2025-10-29T08:31:35.937285715Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2a4a23c1-cc83-496b-b85a-f3e6ea68d6e8 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:34:07 functional-985165 crio[3577]: time="2025-10-29T08:34:07.936670084Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d694689f-d1b4-48c6-9f5c-0962c9035ef4 name=/runtime.v1.ImageService/PullImage
	Oct 29 08:34:24 functional-985165 crio[3577]: time="2025-10-29T08:34:24.937332945Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=24de2eaf-e9d2-4898-9185-76ea5c09e7ee name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5a9d86231cfa3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   7fb5586d404c8       kubernetes-dashboard-855c9754f9-nxnz6        kubernetes-dashboard
	04e85f53b9a81       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   e6a7e9ea4788b       dashboard-metrics-scraper-77bf4d6c4c-krrwd   kubernetes-dashboard
	cb48c7cde6993       docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58                  9 minutes ago       Running             myfrontend                  0                   a8df50e8d6433       sp-pod                                       default
	c13f7b16d06f2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   9f36e3abe32a0       busybox-mount                                default
	9861c02a8eb7c       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   5a38396fd8f41       nginx-svc                                    default
	9fb51645f248b       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   08f19383a205d       mysql-5bb876957f-jqcfr                       default
	81435fa9be113       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   888c797a5d7d8       kube-apiserver-functional-985165             kube-system
	e37497ded1173       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   bab187ae919f4       kube-controller-manager-functional-985165    kube-system
	0c1489fb57439       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   a59c93f3a4451       etcd-functional-985165                       kube-system
	26c2ee426d927       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   6b1cac7afc1e8       kube-scheduler-functional-985165             kube-system
	63201194b95b2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   a126cd1238b5b       coredns-66bc5c9577-6gznf                     kube-system
	5f799e27c708a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   bab187ae919f4       kube-controller-manager-functional-985165    kube-system
	a7447ffbcb24c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   c3b64ee07dac9       storage-provisioner                          kube-system
	235cbcf3d293d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   eff2f3e8d810d       kindnet-lkqmv                                kube-system
	018662b761b86       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   eab1fc6111290       kube-proxy-6dk6h                             kube-system
	bcef6ccc2027c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   a126cd1238b5b       coredns-66bc5c9577-6gznf                     kube-system
	fd56e84cd48bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   c3b64ee07dac9       storage-provisioner                          kube-system
	b9750e483f471       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   eab1fc6111290       kube-proxy-6dk6h                             kube-system
	887d1cad4cf57       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   eff2f3e8d810d       kindnet-lkqmv                                kube-system
	b3b7d3420bf79       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   6b1cac7afc1e8       kube-scheduler-functional-985165             kube-system
	20e305a69de94       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   a59c93f3a4451       etcd-functional-985165                       kube-system
	
	
	==> coredns [63201194b95b2c21d4829aa3aab072479252c630b082c55ef6b3cfc5f871c02d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49876 - 37170 "HINFO IN 4219446538106417252.2434904029384674227. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.088587464s
	
	
	==> coredns [bcef6ccc2027c72c8ded3272652b125820c2b3a55b14bc22071ac4145524ed54] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48697 - 35203 "HINFO IN 8689067978768108871.8486333552842983765. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.513382491s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-985165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-985165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=functional-985165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_26_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:26:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-985165
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:38:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:37:44 +0000   Wed, 29 Oct 2025 08:26:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:37:44 +0000   Wed, 29 Oct 2025 08:26:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:37:44 +0000   Wed, 29 Oct 2025 08:26:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:37:44 +0000   Wed, 29 Oct 2025 08:27:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-985165
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6668b755-308c-4d04-b58d-55c7369011f7
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hsbk2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-qsb8z           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-jqcfr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 coredns-66bc5c9577-6gznf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-985165                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-lkqmv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-985165              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-985165     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6dk6h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-985165              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-krrwd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nxnz6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-985165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-985165 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-985165 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-985165 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-985165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-985165 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-985165 event: Registered Node functional-985165 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-985165 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-985165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-985165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-985165 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-985165 event: Registered Node functional-985165 in Controller
	
	
	==> dmesg <==
	[  +0.101648] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029373] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.989088] kauditd_printk_skb: 47 callbacks suppressed
	[Oct29 08:23] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.056844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000035] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023834] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +4.031591] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +8.063160] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[ +16.382216] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 08:24] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	
	
	==> etcd [0c1489fb57439dc25c07a8a4c861024e146026af26f84a2dac1920218686bca3] <==
	{"level":"warn","ts":"2025-10-29T08:27:51.243938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.251440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.259655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.267674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.274045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.281880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.288039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.294855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.302391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.309409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.318403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.325254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.332315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.338611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.352017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.358768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.365743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.389417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.396348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.402519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:27:51.444211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52546","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:28:20.499223Z","caller":"traceutil/trace.go:172","msg":"trace[146042258] transaction","detail":"{read_only:false; response_revision:673; number_of_response:1; }","duration":"115.356602ms","start":"2025-10-29T08:28:20.383842Z","end":"2025-10-29T08:28:20.499199Z","steps":["trace[146042258] 'process raft request'  (duration: 115.212596ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:37:50.949059Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1179}
	{"level":"info","ts":"2025-10-29T08:37:50.968879Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1179,"took":"19.447745ms","hash":2264453812,"current-db-size-bytes":3592192,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1724416,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-10-29T08:37:50.968944Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2264453812,"revision":1179,"compact-revision":-1}
	
	
	==> etcd [20e305a69de94eb73b8b86d7d2da13956f959a01b379e7cfe8fc2dcc2f1c1fe6] <==
	{"level":"warn","ts":"2025-10-29T08:26:43.306813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:26:43.312912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:26:43.319514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:26:43.326329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:26:43.341618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:26:43.348542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T08:26:43.354946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51226","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T08:27:30.508492Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-29T08:27:30.508770Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-985165","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-29T08:27:30.508899Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T08:27:37.511034Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T08:27:37.514676Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T08:27:37.514732Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-29T08:27:37.514782Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-29T08:27:37.514749Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T08:27:37.514807Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-29T08:27:37.514794Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-29T08:27:37.514822Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-29T08:27:37.514759Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T08:27:37.514846Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T08:27:37.514852Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T08:27:37.517070Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-29T08:27:37.517132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T08:27:37.517160Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-29T08:27:37.517170Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-985165","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:38:30 up 20 min,  0 user,  load average: 1.08, 0.52, 0.47
	Linux functional-985165 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [235cbcf3d293dde40ad1aa2b1a45795a986f297f671de7520e7b2784760a0386] <==
	I1029 08:36:20.925591       1 main.go:301] handling current node
	I1029 08:36:30.924082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:36:30.924118       1 main.go:301] handling current node
	I1029 08:36:40.929358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:36:40.929393       1 main.go:301] handling current node
	I1029 08:36:50.926089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:36:50.926148       1 main.go:301] handling current node
	I1029 08:37:00.930070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:37:00.930105       1 main.go:301] handling current node
	I1029 08:37:10.929664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:37:10.929701       1 main.go:301] handling current node
	I1029 08:37:20.927115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:37:20.927459       1 main.go:301] handling current node
	I1029 08:37:30.925602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:37:30.925639       1 main.go:301] handling current node
	I1029 08:37:40.923709       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:37:40.923776       1 main.go:301] handling current node
	I1029 08:37:50.925150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:37:50.925191       1 main.go:301] handling current node
	I1029 08:38:00.930644       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:38:00.930692       1 main.go:301] handling current node
	I1029 08:38:10.925131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:38:10.925165       1 main.go:301] handling current node
	I1029 08:38:20.925551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:38:20.925584       1 main.go:301] handling current node
	
	
	==> kindnet [887d1cad4cf57f5b746a2316abafbe61e9384e3b1ba050dc67eda65cf0ddec3e] <==
	I1029 08:26:52.189840       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 08:26:52.190151       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1029 08:26:52.190303       1 main.go:148] setting mtu 1500 for CNI 
	I1029 08:26:52.190319       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 08:26:52.190338       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T08:26:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 08:26:52.486568       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 08:26:52.486612       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 08:26:52.486625       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 08:26:52.486788       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 08:26:52.786766       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 08:26:52.786795       1 metrics.go:72] Registering metrics
	I1029 08:26:52.786847       1 controller.go:711] "Syncing nftables rules"
	I1029 08:27:02.490319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:27:02.490380       1 main.go:301] handling current node
	I1029 08:27:12.486910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:27:12.486950       1 main.go:301] handling current node
	I1029 08:27:22.486576       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1029 08:27:22.486623       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81435fa9be11387d1f1271826e482ea68b1daca4c570d9d0e191ab1572099354] <==
	I1029 08:27:51.944173       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 08:27:51.946665       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 08:27:51.946708       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 08:27:52.814359       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1029 08:27:53.119552       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1029 08:27:53.120832       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 08:27:53.125410       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 08:27:53.794697       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 08:27:53.892137       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 08:27:53.952822       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 08:27:53.960590       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 08:27:57.621488       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 08:28:07.598139       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.58.194"}
	I1029 08:28:12.844050       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.50.195"}
	I1029 08:28:15.980066       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.202.6"}
	E1029 08:28:26.985405       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53398: use of closed network connection
	E1029 08:28:28.099841       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53434: use of closed network connection
	I1029 08:28:28.272718       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.104.5"}
	I1029 08:28:28.332685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.166.51"}
	E1029 08:28:30.993179       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53476: use of closed network connection
	I1029 08:28:36.347388       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 08:28:36.452400       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.208.192"}
	I1029 08:28:36.464216       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.244.185"}
	E1029 08:28:39.360009       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59888: use of closed network connection
	I1029 08:37:51.846436       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5f799e27c708acf270e9bbe493d4c7ee58ae41bf20c84bab0c107cc7974bdd67] <==
	I1029 08:27:31.077584       1 serving.go:386] Generated self-signed cert in-memory
	I1029 08:27:31.307582       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1029 08:27:31.307604       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:27:31.309019       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1029 08:27:31.309042       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1029 08:27:31.310052       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1029 08:27:31.310706       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 08:27:31.317586       1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
	I1029 08:27:31.317618       1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
	I1029 08:27:40.586969       1 controllermanager.go:781] "Started controller" controller="endpoints-controller"
	I1029 08:27:40.587086       1 endpoints_controller.go:188] "Starting endpoint controller" logger="endpoints-controller"
	I1029 08:27:40.587106       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint"
	F1029 08:27:40.587401       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/job-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [e37497ded117382d5050a01602637c0c10f082d69df08884849f40fdec22e33f] <==
	I1029 08:27:55.228438       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 08:27:55.232688       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 08:27:55.234954       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 08:27:55.241304       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 08:27:55.249168       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 08:27:55.249190       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 08:27:55.249247       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 08:27:55.249271       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 08:27:55.249242       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 08:27:55.249337       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 08:27:55.249367       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 08:27:55.249344       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 08:27:55.249742       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 08:27:55.249769       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 08:27:55.252927       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 08:27:55.255625       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 08:27:55.256834       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 08:27:55.260081       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 08:27:55.271475       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1029 08:28:36.392962       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1029 08:28:36.397388       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1029 08:28:36.401500       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1029 08:28:36.402243       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1029 08:28:36.406562       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1029 08:28:36.411129       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [018662b761b8617e971c457a4517bc48ba1262e67c25734ddf674fc33bb29900] <==
	I1029 08:27:30.605299       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:27:30.705474       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:27:30.705515       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:27:30.705602       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:27:30.725877       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:27:30.725926       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:27:30.731489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:27:30.731823       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:27:30.731842       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:27:30.733131       1 config.go:200] "Starting service config controller"
	I1029 08:27:30.733155       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:27:30.733247       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:27:30.733255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:27:30.733281       1 config.go:309] "Starting node config controller"
	I1029 08:27:30.733285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:27:30.733292       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:27:30.733651       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:27:30.733685       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:27:30.833242       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:27:30.834394       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 08:27:30.834481       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1029 08:27:51.849403       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:27:51.849408       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1029 08:27:51.849431       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1029 08:27:51.849513       1 reflector.go:205] "Failed to watch" err="nodes \"functional-985165\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [b9750e483f4714d95324768be654ca36e810e1b46d590a6c636384d0591b409d] <==
	I1029 08:26:52.057667       1 server_linux.go:53] "Using iptables proxy"
	I1029 08:26:52.115865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:26:52.216925       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:26:52.216966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1029 08:26:52.217072       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:26:52.236781       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 08:26:52.236843       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:26:52.242246       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:26:52.242688       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:26:52.242776       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:26:52.244428       1 config.go:200] "Starting service config controller"
	I1029 08:26:52.244451       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:26:52.244478       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:26:52.244498       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:26:52.244546       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:26:52.244560       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:26:52.244578       1 config.go:309] "Starting node config controller"
	I1029 08:26:52.244585       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:26:52.344667       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 08:26:52.344692       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:26:52.344730       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:26:52.344836       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [26c2ee426d927134ecf8cd536f69bd32220dbf6405744827c35fbd75316f0307] <==
	E1029 08:27:44.776861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 08:27:46.760303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:27:46.780959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:27:46.882299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:27:47.607285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:27:48.024113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:27:48.309927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 08:27:48.443195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:27:48.839312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:27:48.850927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:27:48.907951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 08:27:49.270779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 08:27:49.512744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:27:49.528601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:27:49.531048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:27:49.586912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:27:49.859394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:27:49.891895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:27:49.899574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:27:50.029541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:27:50.135618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:27:50.398396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1029 08:27:56.255680       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 08:27:58.955582       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 08:27:59.855437       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [b3b7d3420bf79137cd135e83a9b40e92eb4a0fb18240238fd4d6e0b159115a0f] <==
	E1029 08:26:43.799551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:26:43.799610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:26:43.799618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:26:43.799665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:26:43.799749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:26:43.799766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:26:43.799784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:26:44.632543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:26:44.632550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:26:44.657317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:26:44.684820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 08:26:44.789405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:26:44.963253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:26:44.984439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:26:44.985339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:26:44.991740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:26:44.994807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:26:45.056585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1029 08:26:45.397042       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:27:30.398362       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:27:30.398414       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1029 08:27:30.398435       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1029 08:27:30.398483       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1029 08:27:30.398490       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1029 08:27:30.398508       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 29 08:35:55 functional-985165 kubelet[4293]: E1029 08:35:55.936680    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:35:57 functional-985165 kubelet[4293]: E1029 08:35:57.936358    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:36:07 functional-985165 kubelet[4293]: E1029 08:36:07.936441    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:36:11 functional-985165 kubelet[4293]: E1029 08:36:11.936382    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:36:22 functional-985165 kubelet[4293]: E1029 08:36:22.936442    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:36:25 functional-985165 kubelet[4293]: E1029 08:36:25.936760    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:36:35 functional-985165 kubelet[4293]: E1029 08:36:35.937025    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:36:36 functional-985165 kubelet[4293]: E1029 08:36:36.936182    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:36:47 functional-985165 kubelet[4293]: E1029 08:36:47.936591    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:36:48 functional-985165 kubelet[4293]: E1029 08:36:48.936306    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:37:00 functional-985165 kubelet[4293]: E1029 08:37:00.936376    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:37:01 functional-985165 kubelet[4293]: E1029 08:37:01.937061    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:37:11 functional-985165 kubelet[4293]: E1029 08:37:11.936426    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:37:13 functional-985165 kubelet[4293]: E1029 08:37:13.936755    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:37:23 functional-985165 kubelet[4293]: E1029 08:37:23.936320    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:37:27 functional-985165 kubelet[4293]: E1029 08:37:27.938543    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:37:34 functional-985165 kubelet[4293]: E1029 08:37:34.936409    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:37:40 functional-985165 kubelet[4293]: E1029 08:37:40.936908    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:37:47 functional-985165 kubelet[4293]: E1029 08:37:47.938607    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:37:52 functional-985165 kubelet[4293]: E1029 08:37:52.936420    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:38:00 functional-985165 kubelet[4293]: E1029 08:38:00.936860    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:38:06 functional-985165 kubelet[4293]: E1029 08:38:06.936937    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:38:13 functional-985165 kubelet[4293]: E1029 08:38:13.936601    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	Oct 29 08:38:18 functional-985165 kubelet[4293]: E1029 08:38:18.936701    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hsbk2" podUID="8401af77-b86b-4a82-8f58-d91f4c1c5bf8"
	Oct 29 08:38:25 functional-985165 kubelet[4293]: E1029 08:38:25.936411    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qsb8z" podUID="ec6705c8-c6ae-4666-8c1c-60b2627ef624"
	
	
	==> kubernetes-dashboard [5a9d86231cfa3e49d56c7e49b2d0f8365c1dfeb6e644878b143a26c1bc8588f8] <==
	2025/10/29 08:28:41 Using namespace: kubernetes-dashboard
	2025/10/29 08:28:41 Using in-cluster config to connect to apiserver
	2025/10/29 08:28:41 Using secret token for csrf signing
	2025/10/29 08:28:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 08:28:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 08:28:41 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 08:28:41 Generating JWE encryption key
	2025/10/29 08:28:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 08:28:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 08:28:41 Initializing JWE encryption key from synchronized object
	2025/10/29 08:28:41 Creating in-cluster Sidecar client
	2025/10/29 08:28:41 Successful request to sidecar
	2025/10/29 08:28:41 Serving insecurely on HTTP port: 9090
	2025/10/29 08:28:41 Starting overwatch
	
	
	==> storage-provisioner [a7447ffbcb24c9759f615c68c41192567c0a7e808b41a931773caa439afee526] <==
	W1029 08:38:04.948667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:06.952450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:06.956362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:08.959963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:08.965210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:10.968913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:10.972701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:12.975328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:12.979949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:14.983134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:14.987288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:16.990854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:16.996426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:18.999600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:19.004744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:21.008291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:21.012525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:23.016193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:23.022048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:25.025741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:25.029700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:27.033360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:27.037504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:29.040661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:38:29.044985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fd56e84cd48bcb3178082a6add2f48d147ed676c0581731c3141613042cc7c66] <==
	W1029 08:27:05.336038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:07.339453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:07.343686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:09.347143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:09.352466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:11.355884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:11.360153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:13.363652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:13.367567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:15.370373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:15.373866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:17.377641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:17.381778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:19.385500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:19.389721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:21.392615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:21.398860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:23.401973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:23.405972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:25.408588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:25.413371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:27.416971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:27.421107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:29.424185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:27:29.427842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-985165 -n functional-985165
helpers_test.go:269: (dbg) Run:  kubectl --context functional-985165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hsbk2 hello-node-connect-7d85dfc575-qsb8z
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-985165 describe pod busybox-mount hello-node-75c85bcc94-hsbk2 hello-node-connect-7d85dfc575-qsb8z
helpers_test.go:290: (dbg) kubectl --context functional-985165 describe pod busybox-mount hello-node-75c85bcc94-hsbk2 hello-node-connect-7d85dfc575-qsb8z:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-985165/192.168.49.2
	Start Time:       Wed, 29 Oct 2025 08:28:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c13f7b16d06f27d4f37e92f806d772fab4dc508b2b2300b27bbd66ce4b6a732a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 29 Oct 2025 08:28:25 +0000
	      Finished:     Wed, 29 Oct 2025 08:28:25 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4k5x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b4k5x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-985165
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 802ms (1.117s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hsbk2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-985165/192.168.49.2
	Start Time:       Wed, 29 Oct 2025 08:28:28 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rbjlc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rbjlc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hsbk2 to functional-985165
	  Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-qsb8z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-985165/192.168.49.2
	Start Time:       Wed, 29 Oct 2025 08:28:28 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8dmn9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8dmn9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qsb8z to functional-985165
	  Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.12s)
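The root cause recorded above is CRI-O's short-name handling: with short-name mode set to enforcing, the bare reference kicbase/echo-server is ambiguous across the configured unqualified-search registries, so every pull attempt fails. A minimal triage sketch, assuming the functional-985165 profile from this run and the stock CRI-O config path inside the node (docker.io is an assumption about where the image lives):

	# Inspect how the node resolves short names (path is the usual CRI-O default):
	minikube -p functional-985165 ssh -- cat /etc/containers/registries.conf
	# Pulling a fully qualified reference sidesteps short-name resolution entirely:
	minikube -p functional-985165 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest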

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image load --daemon kicbase/echo-server:functional-985165 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-985165" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)
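Here `image load --daemon` exits cleanly but `image ls` never shows the tag. One way to reproduce the check by hand for the same profile, cross-checking against what the crio runtime itself holds (a manual sketch, not part of the test):

	out/minikube-linux-amd64 -p functional-985165 image load --daemon kicbase/echo-server:functional-985165
	out/minikube-linux-amd64 -p functional-985165 image ls
	# Compare against the runtime's own image store:
	minikube -p functional-985165 ssh -- sudo crictl images | grep echo-server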

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image load --daemon kicbase/echo-server:functional-985165 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 image load --daemon kicbase/echo-server:functional-985165 --alsologtostderr: (1.047350796s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-985165" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-985165
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image load --daemon kicbase/echo-server:functional-985165 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 image load --daemon kicbase/echo-server:functional-985165 --alsologtostderr: (1.037164752s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls
E1029 08:28:19.026403    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 image ls: (2.268577132s)
functional_test.go:461: expected "kicbase/echo-server:functional-985165" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image save kicbase/echo-server:functional-985165 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
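`image save` returned success yet wrote no archive. A quick manual check, assuming the profile from this run (the /tmp path is illustrative):

	out/minikube-linux-amd64 -p functional-985165 image save kicbase/echo-server:functional-985165 /tmp/echo-server-save.tar
	# A non-empty, listable tar confirms the save actually produced an archive:
	test -s /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head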

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1029 08:28:20.103226   42641 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:28:20.103651   42641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:20.103666   42641 out.go:374] Setting ErrFile to fd 2...
	I1029 08:28:20.103672   42641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:20.103903   42641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:28:20.104556   42641 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:28:20.104647   42641 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:28:20.105059   42641 cli_runner.go:164] Run: docker container inspect functional-985165 --format={{.State.Status}}
	I1029 08:28:20.124445   42641 ssh_runner.go:195] Run: systemctl --version
	I1029 08:28:20.124517   42641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-985165
	I1029 08:28:20.143336   42641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/functional-985165/id_rsa Username:docker}
	I1029 08:28:20.244363   42641 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1029 08:28:20.244429   42641 cache_images.go:255] Failed to load cached images for "functional-985165": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1029 08:28:20.244451   42641 cache_images.go:267] failed pushing to: functional-985165

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
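This failure cascades from ImageSaveToFile above: the tar was never written, so the stat in cache_images.go fails. Guarding the load on the save's output makes the dependency explicit (path copied from the report; adjust for your workspace):

	TAR=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	[ -s "$TAR" ] && out/minikube-linux-amd64 -p functional-985165 image load "$TAR" \
	  || echo "save step never produced $TAR"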

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-985165
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image save --daemon kicbase/echo-server:functional-985165 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-985165
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-985165: exit status 1 (21.165888ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-985165

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-985165

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
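The test expects the restored image under the localhost/ prefix, and whether the export applied that prefix is exactly what is in question here. Checking both spellings is a reasonable manual triage (a sketch, not the test's own logic):

	docker image inspect --format '{{.Id}}' localhost/kicbase/echo-server:functional-985165 \
	  || docker image inspect --format '{{.Id}}' kicbase/echo-server:functional-985165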

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-985165 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-985165 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hsbk2" [8401af77-b86b-4a82-8f58-d91f4c1c5bf8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-985165 -n functional-985165
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-29 08:38:28.682618837 +0000 UTC m=+1110.714838229
functional_test.go:1460: (dbg) Run:  kubectl --context functional-985165 describe po hello-node-75c85bcc94-hsbk2 -n default
functional_test.go:1460: (dbg) kubectl --context functional-985165 describe po hello-node-75c85bcc94-hsbk2 -n default:
Name:             hello-node-75c85bcc94-hsbk2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-985165/192.168.49.2
Start Time:       Wed, 29 Oct 2025 08:28:28 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rbjlc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rbjlc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hsbk2 to functional-985165
  Normal   Pulling    6m53s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m53s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m53s (x5 over 10m)     kubelet            Error: ErrImagePull
  Normal   BackOff    4m49s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m49s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-985165 logs hello-node-75c85bcc94-hsbk2 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-985165 logs hello-node-75c85bcc94-hsbk2 -n default: exit status 1 (72.165778ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hsbk2" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-985165 logs hello-node-75c85bcc94-hsbk2 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)
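As with ServiceCmdConnect, the pod never starts because the bare image name trips CRI-O's enforcing short-name mode. Repointing the existing deployment at a fully qualified reference avoids the ambiguity (a sketch; docker.io is an assumption about the image's home, and the container name echo-server is taken from the describe output above):

	kubectl --context functional-985165 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest
	kubectl --context functional-985165 rollout status deployment/hello-node --timeout=120s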

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 service --namespace=default --https --url hello-node: exit status 115 (545.997487ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31662
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-985165 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
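The Format and URL subtests below fail the same way: the NodePort exists (the URL is even printed on stdout), but with no ready pod behind the service minikube exits with SVC_UNREACHABLE. Confirming the backing endpoints first separates "no service" from "no pods" (a manual sketch for the same context):

	kubectl --context functional-985165 get endpoints hello-node
	kubectl --context functional-985165 get pods -l app=hello-node -o wide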

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 service hello-node --url --format={{.IP}}: exit status 115 (550.519662ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-985165 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 service hello-node --url: exit status 115 (546.94345ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31662
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-985165 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31662
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.32s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-478107 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-478107 --output=json --user=testUser: exit status 80 (2.323182303s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"79717ca6-aab4-4d33-9790-4f5ec3345bfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-478107 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"83fedf3d-141f-430e-8f26-b45c7fd7bc43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-29T08:48:03Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"fe553a42-67c8-4039-ad6d-c2f4cf891384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-478107 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.32s)
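The GUEST_PAUSE error comes from runc, not from the JSON output plumbing under test: /run/runc is missing inside the node, so runc cannot enumerate containers. The unpause failure below, and the TestPause/serial/Pause failure later in this report, share this root cause. A direct reproduction of the failing call, assuming the same profile:

	minikube -p json-output-478107 ssh -- ls -ld /run/runc
	minikube -p json-output-478107 ssh -- sudo runc list -f json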

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.77s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-478107 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-478107 --output=json --user=testUser: exit status 80 (1.769009531s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bbf6dc44-2088-4bcc-a1e1-08c5ac25df9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-478107 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ec2e2ab0-ffa7-4c44-a12e-7dede44a77d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-29T08:48:05Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"514dffc0-c68c-419e-a1bf-75e903b1a0a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-478107 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.77s)

                                                
                                    
x
+
TestPause/serial/Pause (6.54s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-470577 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-470577 --alsologtostderr -v=5: exit status 80 (1.869066738s)

                                                
                                                
-- stdout --
	* Pausing node pause-470577 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:03:11.062636  204083 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:03:11.062768  204083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:03:11.062777  204083 out.go:374] Setting ErrFile to fd 2...
	I1029 09:03:11.062781  204083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:03:11.063050  204083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:03:11.063323  204083 out.go:368] Setting JSON to false
	I1029 09:03:11.063346  204083 mustload.go:66] Loading cluster: pause-470577
	I1029 09:03:11.063716  204083 config.go:182] Loaded profile config "pause-470577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:03:11.064149  204083 cli_runner.go:164] Run: docker container inspect pause-470577 --format={{.State.Status}}
	I1029 09:03:11.084422  204083 host.go:66] Checking if "pause-470577" exists ...
	I1029 09:03:11.084708  204083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:03:11.157301  204083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:94 SystemTime:2025-10-29 09:03:11.143189356 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:03:11.158202  204083 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-470577 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:03:11.160531  204083 out.go:179] * Pausing node pause-470577 ... 
	I1029 09:03:11.161977  204083 host.go:66] Checking if "pause-470577" exists ...
	I1029 09:03:11.162295  204083 ssh_runner.go:195] Run: systemctl --version
	I1029 09:03:11.162359  204083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470577
	I1029 09:03:11.184678  204083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/pause-470577/id_rsa Username:docker}
	I1029 09:03:11.287549  204083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:03:11.305287  204083 pause.go:52] kubelet running: true
	I1029 09:03:11.305349  204083 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:03:11.456370  204083 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:03:11.456464  204083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:03:11.551131  204083 cri.go:89] found id: "f3e422aa25145c663153c917c76efc65f478c96247053f901e05108a0d8f5aa2"
	I1029 09:03:11.551158  204083 cri.go:89] found id: "850cb974c0fce9f271c5771918fbf2264257b76cd9e075d5544e45d28ab6dbe5"
	I1029 09:03:11.551164  204083 cri.go:89] found id: "b8fcde080ae08d40f388d6152681d0b178abcdd5cff2c6772f6bcc6980381a0a"
	I1029 09:03:11.551168  204083 cri.go:89] found id: "c3bc4e6855a3033b28fd7c26d7a35b91fff3c39be0c2aab57f86ab1f7a6f3c11"
	I1029 09:03:11.551172  204083 cri.go:89] found id: "ccd4699a211e9aa9fc943cc8b5a917cb6496ee857828ec92e00a99c4eccef775"
	I1029 09:03:11.551177  204083 cri.go:89] found id: "4f2c8636745db7dc71fc4459d44cb39b28a4ab71dcf21c9341f4b0795cde0af8"
	I1029 09:03:11.551180  204083 cri.go:89] found id: "b94585b063ecb11c8ea16b9f526487723c9e9d4fdc61153923721f26a42ef4ba"
	I1029 09:03:11.551184  204083 cri.go:89] found id: ""
	I1029 09:03:11.551233  204083 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:03:11.564645  204083 retry.go:31] will retry after 243.615953ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:03:11Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:03:11.809206  204083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:03:11.823413  204083 pause.go:52] kubelet running: false
	I1029 09:03:11.823471  204083 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:03:11.940128  204083 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:03:11.940212  204083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:03:12.019251  204083 cri.go:89] found id: "f3e422aa25145c663153c917c76efc65f478c96247053f901e05108a0d8f5aa2"
	I1029 09:03:12.019288  204083 cri.go:89] found id: "850cb974c0fce9f271c5771918fbf2264257b76cd9e075d5544e45d28ab6dbe5"
	I1029 09:03:12.019297  204083 cri.go:89] found id: "b8fcde080ae08d40f388d6152681d0b178abcdd5cff2c6772f6bcc6980381a0a"
	I1029 09:03:12.019303  204083 cri.go:89] found id: "c3bc4e6855a3033b28fd7c26d7a35b91fff3c39be0c2aab57f86ab1f7a6f3c11"
	I1029 09:03:12.019308  204083 cri.go:89] found id: "ccd4699a211e9aa9fc943cc8b5a917cb6496ee857828ec92e00a99c4eccef775"
	I1029 09:03:12.019313  204083 cri.go:89] found id: "4f2c8636745db7dc71fc4459d44cb39b28a4ab71dcf21c9341f4b0795cde0af8"
	I1029 09:03:12.019318  204083 cri.go:89] found id: "b94585b063ecb11c8ea16b9f526487723c9e9d4fdc61153923721f26a42ef4ba"
	I1029 09:03:12.019321  204083 cri.go:89] found id: ""
	I1029 09:03:12.019416  204083 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:03:12.032715  204083 retry.go:31] will retry after 301.161592ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:03:12Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:03:12.334130  204083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:03:12.348177  204083 pause.go:52] kubelet running: false
	I1029 09:03:12.348240  204083 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:03:12.465544  204083 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:03:12.465644  204083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:03:12.539910  204083 cri.go:89] found id: "f3e422aa25145c663153c917c76efc65f478c96247053f901e05108a0d8f5aa2"
	I1029 09:03:12.539934  204083 cri.go:89] found id: "850cb974c0fce9f271c5771918fbf2264257b76cd9e075d5544e45d28ab6dbe5"
	I1029 09:03:12.539940  204083 cri.go:89] found id: "b8fcde080ae08d40f388d6152681d0b178abcdd5cff2c6772f6bcc6980381a0a"
	I1029 09:03:12.539945  204083 cri.go:89] found id: "c3bc4e6855a3033b28fd7c26d7a35b91fff3c39be0c2aab57f86ab1f7a6f3c11"
	I1029 09:03:12.539949  204083 cri.go:89] found id: "ccd4699a211e9aa9fc943cc8b5a917cb6496ee857828ec92e00a99c4eccef775"
	I1029 09:03:12.539954  204083 cri.go:89] found id: "4f2c8636745db7dc71fc4459d44cb39b28a4ab71dcf21c9341f4b0795cde0af8"
	I1029 09:03:12.539958  204083 cri.go:89] found id: "b94585b063ecb11c8ea16b9f526487723c9e9d4fdc61153923721f26a42ef4ba"
	I1029 09:03:12.539962  204083 cri.go:89] found id: ""
	I1029 09:03:12.540038  204083 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:03:12.650732  204083 out.go:203] 
	W1029 09:03:12.714787  204083 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:03:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:03:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:03:12.714819  204083 out.go:285] * 
	* 
	W1029 09:03:12.721494  204083 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:03:12.786799  204083 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-470577 --alsologtostderr -v=5" : exit status 80
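Reading the trace above: minikube's pause path first enumerates the cluster's containers through crictl (the same seven IDs are found on every pass), then tries to list the matching low-level runtime state with `sudo runc list -f json`, and it is that second step that fails on every retry with `open /run/runc: no such file or directory`. A minimal sketch for reproducing the two calls by hand against this profile; the reading that crio on this image keeps its runtime state somewhere other than runc's default root is an assumption drawn from the error, not something the log proves:

	# runc's default state root for root is /run/runc; on this node it does not
	# exist, so the list fails even though crio-managed containers are running
	minikube ssh -p pause-470577 "sudo runc list -f json"
	# crictl talks to crio directly and still sees the kube-system containers
	minikube ssh -p pause-470577 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"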
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-470577
helpers_test.go:243: (dbg) docker inspect pause-470577:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444",
	        "Created": "2025-10-29T09:01:52.493950754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 178255,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:01:53.086456256Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/hostname",
	        "HostsPath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/hosts",
	        "LogPath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444-json.log",
	        "Name": "/pause-470577",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-470577:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-470577",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444",
	                "LowerDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-470577",
	                "Source": "/var/lib/docker/volumes/pause-470577/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-470577",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-470577",
	                "name.minikube.sigs.k8s.io": "pause-470577",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c03619781b7ed029c71b28525d5f2168f40036e301d06a087e95e8d7245c7281",
	            "SandboxKey": "/var/run/docker/netns/c03619781b7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-470577": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:f4:16:52:9b:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ea35bc3b260c5668bd6e55f3b0da1671c1f08b741be62af97f0116b0e0f02b51",
	                    "EndpointID": "f6cd0d6bd98a2d79e866710fdb324e179c65f371b9af09210430181297dfee30",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-470577",
	                        "eab5fc42d7ed"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
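The inspect output is consistent with the failure sitting inside the guest rather than at the Docker layer: the node container is still "Running": true with "Paused": false, and the 22/tcp mapping to 127.0.0.1:32978 matches the SSH endpoint the pause command dialed at 09:03:11.184. The same field can be read back with the format template the log itself uses:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-470577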
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-470577 -n pause-470577
E1029 09:03:12.893186    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-470577 -n pause-470577: exit status 2 (419.165244ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
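minikube status reports state through its exit code as well as stdout, so a nonzero exit alongside `Running` is expected here: the host container is up while not every cluster component is in the expected state after the failed pause, which is why the harness flags the exit code as "may be ok". The per-component breakdown comes from the same binary without the `{{.Host}}` filter; a sketch using this run's profile:

	out/minikube-linux-amd64 status -p pause-470577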
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-470577 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-470577 logs -n 25: (1.631572974s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-240549 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl cat docker --no-pager                                                                       │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /etc/docker/daemon.json                                                                           │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo docker system info                                                                                    │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cri-dockerd --version                                                                                 │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl cat containerd --no-pager                                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /etc/containerd/config.toml                                                                       │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo containerd config dump                                                                                │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl cat crio --no-pager                                                                         │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo crio config                                                                                           │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ delete  │ -p cilium-240549                                                                                                            │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │ 29 Oct 25 09:02 UTC │
	│ start   │ -p force-systemd-flag-699681 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-699681 │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ start   │ -p pause-470577 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-470577              │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │ 29 Oct 25 09:03 UTC │
	│ start   │ -p NoKubernetes-808010 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-808010       │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │                     │
	│ delete  │ -p force-systemd-env-317579                                                                                                 │ force-systemd-env-317579  │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │ 29 Oct 25 09:03 UTC │
	│ start   │ -p cert-expiration-230123 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-230123    │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │                     │
	│ pause   │ -p pause-470577 --alsologtostderr -v=5                                                                                      │ pause-470577              │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:03:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:03:09.690039  203542 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:03:09.690345  203542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:03:09.690349  203542 out.go:374] Setting ErrFile to fd 2...
	I1029 09:03:09.690352  203542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:03:09.690590  203542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:03:09.691105  203542 out.go:368] Setting JSON to false
	I1029 09:03:09.692228  203542 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2738,"bootTime":1761725852,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:03:09.692285  203542 start.go:143] virtualization: kvm guest
	I1029 09:03:09.694645  203542 out.go:179] * [cert-expiration-230123] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:03:09.696372  203542 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:03:09.696381  203542 notify.go:221] Checking for updates...
	I1029 09:03:09.698757  203542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:03:09.700066  203542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:03:09.701390  203542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:03:09.702641  203542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:03:09.704025  203542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:03:09.705805  203542 config.go:182] Loaded profile config "NoKubernetes-808010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1029 09:03:09.705892  203542 config.go:182] Loaded profile config "force-systemd-flag-699681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:03:09.706007  203542 config.go:182] Loaded profile config "pause-470577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:03:09.706102  203542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:03:09.734800  203542 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:03:09.734957  203542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:03:09.799219  203542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-29 09:03:09.789269692 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:03:09.799357  203542 docker.go:319] overlay module found
	I1029 09:03:09.801268  203542 out.go:179] * Using the docker driver based on user configuration
	I1029 09:03:09.802407  203542 start.go:309] selected driver: docker
	I1029 09:03:09.802414  203542 start.go:930] validating driver "docker" against <nil>
	I1029 09:03:09.802427  203542 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:03:09.803229  203542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:03:09.868178  203542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-29 09:03:09.857283709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:03:09.868314  203542 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:03:09.868540  203542 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 09:03:09.870229  203542 out.go:179] * Using Docker driver with root privileges
	I1029 09:03:09.871423  203542 cni.go:84] Creating CNI manager for ""
	I1029 09:03:09.871483  203542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:03:09.871489  203542 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:03:09.871597  203542 start.go:353] cluster config:
	{Name:cert-expiration-230123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-230123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:03:09.872913  203542 out.go:179] * Starting "cert-expiration-230123" primary control-plane node in "cert-expiration-230123" cluster
	I1029 09:03:09.874110  203542 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:03:09.875392  203542 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:03:09.876523  203542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:03:09.876558  203542 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:03:09.876567  203542 cache.go:59] Caching tarball of preloaded images
	I1029 09:03:09.876625  203542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:03:09.876659  203542 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:03:09.876668  203542 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:03:09.876764  203542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/cert-expiration-230123/config.json ...
	I1029 09:03:09.876784  203542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/cert-expiration-230123/config.json: {Name:mka3f9f7c1a7bc3f23e527343e7d6d6cd2f84459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:03:09.900936  203542 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:03:09.900946  203542 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:03:09.900960  203542 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:03:09.901000  203542 start.go:360] acquireMachinesLock for cert-expiration-230123: {Name:mk3d18b8b6520166b822ade184d069687ae67ed0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:03:09.901109  203542 start.go:364] duration metric: took 92.497µs to acquireMachinesLock for "cert-expiration-230123"
	I1029 09:03:09.901134  203542 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-230123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-230123 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:03:09.901198  203542 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:03:08.834214  201876 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1029 09:03:08.863118  201876 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1029 09:03:08.863203  201876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:03:08.895012  201876 cri.go:89] found id: "bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37"
	I1029 09:03:08.895040  201876 cri.go:89] found id: "f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa"
	I1029 09:03:08.895047  201876 cri.go:89] found id: "6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988"
	I1029 09:03:08.895052  201876 cri.go:89] found id: "1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188"
	I1029 09:03:08.895056  201876 cri.go:89] found id: ""
	W1029 09:03:08.895066  201876 kubeadm.go:839] found 4 kube-system containers to stop
	I1029 09:03:08.895075  201876 cri.go:252] Stopping containers: [bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37 f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa 6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988 1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188]
	I1029 09:03:08.895133  201876 ssh_runner.go:195] Run: which crictl
	I1029 09:03:08.899383  201876 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37 f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa 6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988 1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188
	I1029 09:03:10.492687  201876 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37 f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa 6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988 1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188: (1.593266409s)
	I1029 09:03:10.492771  201876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:03:10.511363  201876 out.go:179]   - Kubernetes: Stopped
	I1029 09:03:09.101405  200413 addons.go:515] duration metric: took 5.301023ms for enable addons: enabled=[]
	I1029 09:03:09.101448  200413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:03:09.222782  200413 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:03:09.236414  200413 node_ready.go:35] waiting up to 6m0s for node "pause-470577" to be "Ready" ...
	I1029 09:03:09.244338  200413 node_ready.go:49] node "pause-470577" is "Ready"
	I1029 09:03:09.244366  200413 node_ready.go:38] duration metric: took 7.90925ms for node "pause-470577" to be "Ready" ...
	I1029 09:03:09.244382  200413 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:03:09.244431  200413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:03:09.258080  200413 api_server.go:72] duration metric: took 162.001273ms to wait for apiserver process to appear ...
	I1029 09:03:09.258105  200413 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:03:09.258125  200413 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:03:09.262276  200413 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:03:09.263241  200413 api_server.go:141] control plane version: v1.34.1
	I1029 09:03:09.263269  200413 api_server.go:131] duration metric: took 5.156483ms to wait for apiserver health ...
	I1029 09:03:09.263282  200413 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:03:09.266317  200413 system_pods.go:59] 7 kube-system pods found
	I1029 09:03:09.266359  200413 system_pods.go:61] "coredns-66bc5c9577-6v49k" [55856b5d-4d88-46ea-867b-fb904a23bd57] Running
	I1029 09:03:09.266366  200413 system_pods.go:61] "etcd-pause-470577" [d43e38a0-e74d-44f6-a56c-ffed2baf8b0e] Running
	I1029 09:03:09.266372  200413 system_pods.go:61] "kindnet-tkv8d" [6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9] Running
	I1029 09:03:09.266377  200413 system_pods.go:61] "kube-apiserver-pause-470577" [7a7a8174-631f-4c30-b67f-9552c2219a85] Running
	I1029 09:03:09.266382  200413 system_pods.go:61] "kube-controller-manager-pause-470577" [cb8c97e9-826d-4f1b-8533-ccf16d437083] Running
	I1029 09:03:09.266388  200413 system_pods.go:61] "kube-proxy-bqbws" [ca78ec0e-a7c1-4000-ac53-c7bad59a73f7] Running
	I1029 09:03:09.266397  200413 system_pods.go:61] "kube-scheduler-pause-470577" [35b51979-eb1d-4c32-ae92-279085d9cd3e] Running
	I1029 09:03:09.266405  200413 system_pods.go:74] duration metric: took 3.116047ms to wait for pod list to return data ...
	I1029 09:03:09.266417  200413 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:03:09.268502  200413 default_sa.go:45] found service account: "default"
	I1029 09:03:09.268523  200413 default_sa.go:55] duration metric: took 2.100627ms for default service account to be created ...
	I1029 09:03:09.268533  200413 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:03:09.271303  200413 system_pods.go:86] 7 kube-system pods found
	I1029 09:03:09.271326  200413 system_pods.go:89] "coredns-66bc5c9577-6v49k" [55856b5d-4d88-46ea-867b-fb904a23bd57] Running
	I1029 09:03:09.271332  200413 system_pods.go:89] "etcd-pause-470577" [d43e38a0-e74d-44f6-a56c-ffed2baf8b0e] Running
	I1029 09:03:09.271339  200413 system_pods.go:89] "kindnet-tkv8d" [6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9] Running
	I1029 09:03:09.271343  200413 system_pods.go:89] "kube-apiserver-pause-470577" [7a7a8174-631f-4c30-b67f-9552c2219a85] Running
	I1029 09:03:09.271346  200413 system_pods.go:89] "kube-controller-manager-pause-470577" [cb8c97e9-826d-4f1b-8533-ccf16d437083] Running
	I1029 09:03:09.271349  200413 system_pods.go:89] "kube-proxy-bqbws" [ca78ec0e-a7c1-4000-ac53-c7bad59a73f7] Running
	I1029 09:03:09.271352  200413 system_pods.go:89] "kube-scheduler-pause-470577" [35b51979-eb1d-4c32-ae92-279085d9cd3e] Running
	I1029 09:03:09.271359  200413 system_pods.go:126] duration metric: took 2.820479ms to wait for k8s-apps to be running ...
	I1029 09:03:09.271368  200413 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:03:09.271406  200413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:03:09.284706  200413 system_svc.go:56] duration metric: took 13.324717ms WaitForService to wait for kubelet
	I1029 09:03:09.284751  200413 kubeadm.go:587] duration metric: took 188.676898ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:03:09.284785  200413 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:03:09.287541  200413 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:03:09.287585  200413 node_conditions.go:123] node cpu capacity is 8
	I1029 09:03:09.287601  200413 node_conditions.go:105] duration metric: took 2.80973ms to run NodePressure ...
	I1029 09:03:09.287615  200413 start.go:242] waiting for startup goroutines ...
	I1029 09:03:09.287624  200413 start.go:247] waiting for cluster config update ...
	I1029 09:03:09.287633  200413 start.go:256] writing updated cluster config ...
	I1029 09:03:09.287982  200413 ssh_runner.go:195] Run: rm -f paused
	I1029 09:03:09.292031  200413 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:03:09.292788  200413 kapi.go:59] client config for pause-470577: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.key", CAFile:"/home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:03:09.295797  200413 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6v49k" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.300355  200413 pod_ready.go:94] pod "coredns-66bc5c9577-6v49k" is "Ready"
	I1029 09:03:09.300377  200413 pod_ready.go:86] duration metric: took 4.559664ms for pod "coredns-66bc5c9577-6v49k" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.302308  200413 pod_ready.go:83] waiting for pod "etcd-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.306247  200413 pod_ready.go:94] pod "etcd-pause-470577" is "Ready"
	I1029 09:03:09.306272  200413 pod_ready.go:86] duration metric: took 3.940138ms for pod "etcd-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.308378  200413 pod_ready.go:83] waiting for pod "kube-apiserver-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.311897  200413 pod_ready.go:94] pod "kube-apiserver-pause-470577" is "Ready"
	I1029 09:03:09.311920  200413 pod_ready.go:86] duration metric: took 3.521562ms for pod "kube-apiserver-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.313835  200413 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.696283  200413 pod_ready.go:94] pod "kube-controller-manager-pause-470577" is "Ready"
	I1029 09:03:09.696306  200413 pod_ready.go:86] duration metric: took 382.452994ms for pod "kube-controller-manager-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.896551  200413 pod_ready.go:83] waiting for pod "kube-proxy-bqbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.296380  200413 pod_ready.go:94] pod "kube-proxy-bqbws" is "Ready"
	I1029 09:03:10.296408  200413 pod_ready.go:86] duration metric: took 399.830991ms for pod "kube-proxy-bqbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.496920  200413 pod_ready.go:83] waiting for pod "kube-scheduler-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.896684  200413 pod_ready.go:94] pod "kube-scheduler-pause-470577" is "Ready"
	I1029 09:03:10.896715  200413 pod_ready.go:86] duration metric: took 399.763639ms for pod "kube-scheduler-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.896728  200413 pod_ready.go:40] duration metric: took 1.604660801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:03:10.955424  200413 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:03:10.957521  200413 out.go:179] * Done! kubectl is now configured to use "pause-470577" cluster and "default" namespace by default
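
The readiness loop above polls each control-plane pod in turn. Outside the test harness the same checks can be approximated with kubectl wait; a minimal sketch, assuming the kubeconfig context the profile just created (the label selectors are taken from the log lines above):

    kubectl --context pause-470577 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
    kubectl --context pause-470577 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=4m0s
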
	I1029 09:03:10.513337  201876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:03:10.554886  201876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:03:10.560625  201876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:03:10.560699  201876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:03:10.570964  201876 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:03:10.571020  201876 start.go:496] detecting cgroup driver to use...
	I1029 09:03:10.571059  201876 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:03:10.571129  201876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:03:10.588841  201876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:03:10.604127  201876 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:03:10.604187  201876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:03:10.621069  201876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:03:10.638493  201876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:03:10.750475  201876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:03:10.844883  201876 docker.go:234] disabling docker service ...
	I1029 09:03:10.844952  201876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:03:10.860791  201876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:03:10.876008  201876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:03:11.005796  201876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:03:11.140388  201876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:03:11.157964  201876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:03:11.174266  201876 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/linux/amd64/v0.0.0/kubeadm
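
The checksum=file: suffix tells the downloader to fetch the published SHA-1 next to the binary and verify against it (the v0.0.0 in this log line is a placeholder version). A manual equivalent, with an example version substituted for the placeholder:

    VER=v1.34.1   # example only; the log shows the placeholder v0.0.0
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm.sha1"
    echo "$(cat kubeadm.sha1)  kubeadm" | sha1sum --check
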
	I1029 09:03:08.458468  198407 out.go:252]   - Generating certificates and keys ...
	I1029 09:03:08.458591  198407 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:03:08.458687  198407 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:03:08.694528  198407 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:03:09.240939  198407 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:03:09.456210  198407 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:03:09.573931  198407 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:03:09.695055  198407 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:03:09.695246  198407 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-699681 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1029 09:03:09.880899  198407 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:03:09.881130  198407 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-699681 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1029 09:03:10.433817  198407 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:03:10.670727  198407 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:03:10.867321  198407 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:03:10.867499  198407 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:03:11.023436  198407 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:03:11.243787  198407 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:03:12.101035  198407 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:03:12.566689  198407 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:03:12.847259  198407 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:03:12.848974  198407 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:03:12.857625  198407 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:03:12.880965  198407 out.go:252]   - Booting up control plane ...
	I1029 09:03:12.881133  198407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:03:12.881249  198407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:03:12.881346  198407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:03:12.881488  198407 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:03:12.881609  198407 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:03:12.889173  198407 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:03:12.889763  198407 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:03:12.889826  198407 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:03:13.019255  198407 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:03:13.019472  198407 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
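
kubeadm is polling the kubelet's local health endpoint named in the message; with shell access to the node (e.g. via minikube ssh) the same probe is a one-liner:

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy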
	
	
	==> CRI-O <==
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.664754627Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.665643807Z" level=info msg="Conmon does support the --sync option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.665670452Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.665685329Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.66647355Z" level=info msg="Conmon does support the --sync option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.666492145Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.671374632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.671405205Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.672009363Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.67246151Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.672525898Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.678285147Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.727606235Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-6v49k Namespace:kube-system ID:7a4da8e8c7196bcdcb5359b75fdec570ea4472b216b4318ff17932bae94cb4fc UID:55856b5d-4d88-46ea-867b-fb904a23bd57 NetNS:/var/run/netns/c13cea3f-1c15-4233-8df4-8a01b409fc9b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ac80}] Aliases:map[]}"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.727894471Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-6v49k for CNI network kindnet (type=ptp)"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728516688Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728547912Z" level=info msg="Starting seccomp notifier watcher"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728613701Z" level=info msg="Create NRI interface"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728715099Z" level=info msg="built-in NRI default validator is disabled"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728731443Z" level=info msg="runtime interface created"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728741284Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728746082Z" level=info msg="runtime interface starting up..."
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728751092Z" level=info msg="starting plugins..."
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728762988Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.729086296Z" level=info msg="No systemd watchdog enabled"
	Oct 29 09:03:07 pause-470577 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f3e422aa25145       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   7a4da8e8c7196       coredns-66bc5c9577-6v49k               kube-system
	850cb974c0fce       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   54 seconds ago       Running             kube-proxy                0                   290cccc322a17       kube-proxy-bqbws                       kube-system
	b8fcde080ae08       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   54 seconds ago       Running             kindnet-cni               0                   80f063baa9f44       kindnet-tkv8d                          kube-system
	c3bc4e6855a30       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   c26af36470cd1       etcd-pause-470577                      kube-system
	ccd4699a211e9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   3970681cce0fd       kube-controller-manager-pause-470577   kube-system
	4f2c8636745db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   645a44e386292       kube-scheduler-pause-470577            kube-system
	b94585b063ecb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   bd0e0755894da       kube-apiserver-pause-470577            kube-system
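
This table is the runtime's own container listing; it can be regenerated on the node with crictl (assuming CRI-O's default socket configured earlier):

    sudo crictl ps -a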
	
	
	==> coredns [f3e422aa25145c663153c917c76efc65f478c96247053f901e05108a0d8f5aa2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55930 - 49272 "HINFO IN 8875921612014454862.4833462212965654197. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.110871398s
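
The bracketed ID in the section header is the container the log was read from, so the same output is available either through the API server or straight from the runtime by ID prefix:

    kubectl -n kube-system logs coredns-66bc5c9577-6v49k
    sudo crictl logs f3e422aa25145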
	
	
	==> describe nodes <==
	Name:               pause-470577
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-470577
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=pause-470577
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_02_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:02:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-470577
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:03:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:02:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:02:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:02:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:03:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-470577
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                329e3ac1-f6c5-46e9-9bcc-483df219274f
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6v49k                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-pause-470577                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-tkv8d                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-pause-470577             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-pause-470577    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-bqbws                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-470577             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s   kubelet          Node pause-470577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s   kubelet          Node pause-470577 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s   kubelet          Node pause-470577 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node pause-470577 event: Registered Node pause-470577 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-470577 status is now: NodeReady
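
This section is equivalent to describing the node through the API server; to reproduce it against this profile:

    kubectl --context pause-470577 describe node pause-470577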
	
	
	==> dmesg <==
	[  +0.101648] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029373] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.989088] kauditd_printk_skb: 47 callbacks suppressed
	[Oct29 08:23] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.056844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000035] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023834] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +4.031591] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +8.063160] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[ +16.382216] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 08:24] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
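
The martian-source entries above come from the kernel ring buffer; to isolate them on the host (assuming the util-linux dmesg, where -T prints wall-clock timestamps):

    sudo dmesg -T | grep -i martian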
	
	
	==> etcd [c3bc4e6855a3033b28fd7c26d7a35b91fff3c39be0c2aab57f86ab1f7a6f3c11] <==
	{"level":"warn","ts":"2025-10-29T09:02:11.114241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.120507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.136673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.151533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.166111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.182256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.194083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.205578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.218143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.250350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.264273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.279735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.286883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.298906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.310815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.324748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.333324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.341541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.349429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.358594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.367015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.390839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.398368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.409081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.469700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:03:14 up 45 min,  0 user,  load average: 3.75, 2.05, 1.40
	Linux pause-470577 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8fcde080ae08d40f388d6152681d0b178abcdd5cff2c6772f6bcc6980381a0a] <==
	I1029 09:02:20.338126       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:02:20.338604       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:02:20.338808       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:02:20.338849       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:02:20.338872       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:02:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:02:20.623081       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:02:20.623114       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:02:20.623124       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:02:20.623837       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:02:50.623722       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:02:50.623725       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:02:50.623726       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:02:50.624186       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 09:02:52.223292       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:02:52.223326       1 metrics.go:72] Registering metrics
	I1029 09:02:52.223408       1 controller.go:711] "Syncing nftables rules"
	I1029 09:03:00.629121       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:03:00.629179       1 main.go:301] handling current node
	I1029 09:03:10.631187       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:03:10.631249       1 main.go:301] handling current node
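
The NRI dial failure earlier in this log names the same socket that the CRI-O configuration dump declares (enable_nri = true, nri_listen = /var/run/nri/nri.sock); whether the socket exists at a given moment can be checked directly:

    sudo test -S /var/run/nri/nri.sock && echo present || echo missing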
	
	
	==> kube-apiserver [b94585b063ecb11c8ea16b9f526487723c9e9d4fdc61153923721f26a42ef4ba] <==
	E1029 09:02:12.190329       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1029 09:02:12.234902       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:02:12.247959       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:02:12.250685       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:02:12.259836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:02:12.263408       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:02:12.272696       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:02:13.030146       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:02:13.033613       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:02:13.033634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:02:13.583565       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:02:13.628166       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:02:13.737260       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:02:13.746148       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:02:13.747566       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:02:13.755072       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:02:14.090298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:02:14.585956       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:02:14.595389       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:02:14.602405       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:02:19.094264       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:02:19.094265       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:02:19.945651       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:02:19.997154       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:02:20.002178       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ccd4699a211e9aa9fc943cc8b5a917cb6496ee857828ec92e00a99c4eccef775] <==
	I1029 09:02:19.089795       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:02:19.089852       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:02:19.089858       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:02:19.090947       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:02:19.090979       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:02:19.091042       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:02:19.091142       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:02:19.091164       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:02:19.091275       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:02:19.091308       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:02:19.091183       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:02:19.091193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:02:19.091191       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:02:19.091836       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:02:19.091856       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:02:19.091865       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:02:19.092673       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:02:19.096825       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:02:19.097942       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:02:19.099222       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:02:19.105279       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:02:19.112054       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:02:19.117700       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:02:19.119305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:03:04.047772       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [850cb974c0fce9f271c5771918fbf2264257b76cd9e075d5544e45d28ab6dbe5] <==
	I1029 09:02:20.228755       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:02:20.324903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:02:20.426443       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:02:20.426568       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:02:20.426718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:02:20.456432       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:02:20.456617       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:02:20.463275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:02:20.463949       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:02:20.464088       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:02:20.466615       1 config.go:200] "Starting service config controller"
	I1029 09:02:20.466640       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:02:20.466699       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:02:20.466706       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:02:20.466751       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:02:20.466756       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:02:20.468017       1 config.go:309] "Starting node config controller"
	I1029 09:02:20.468041       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:02:20.566821       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:02:20.566869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:02:20.566917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:02:20.568196       1 shared_informer.go:356] "Caches are synced" controller="node config"
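
The startup warning above refers to kube-proxy's nodePortAddresses field, which kubeadm-based clusters keep in the kube-proxy ConfigMap; to inspect the deployed value (no match means the field is unset, exactly as the warning says):

    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses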
	
	
	==> kube-scheduler [4f2c8636745db7dc71fc4459d44cb39b28a4ab71dcf21c9341f4b0795cde0af8] <==
	E1029 09:02:12.186622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:02:12.186738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:02:12.186847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:02:12.186946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:02:12.187055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:02:12.187170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:02:12.187292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:02:12.187410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:02:12.187510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:02:12.187636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:02:12.187779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:02:12.188576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:02:12.191147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:02:13.020798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:02:13.034051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 09:02:13.111552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:02:13.147119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:02:13.147244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:02:13.167971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:02:13.173242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:02:13.311363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:02:13.317024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:02:13.347055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:02:13.357966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1029 09:02:15.966501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168548    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-proxy\") pod \"kube-proxy-bqbws\" (UID: \"ca78ec0e-a7c1-4000-ac53-c7bad59a73f7\") " pod="kube-system/kube-proxy-bqbws"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168575    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2w55\" (UniqueName: \"kubernetes.io/projected/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-api-access-d2w55\") pod \"kube-proxy-bqbws\" (UID: \"ca78ec0e-a7c1-4000-ac53-c7bad59a73f7\") " pod="kube-system/kube-proxy-bqbws"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168599    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-cni-cfg\") pod \"kindnet-tkv8d\" (UID: \"6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9\") " pod="kube-system/kindnet-tkv8d"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168624    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzn9\" (UniqueName: \"kubernetes.io/projected/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-kube-api-access-vnzn9\") pod \"kindnet-tkv8d\" (UID: \"6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9\") " pod="kube-system/kindnet-tkv8d"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168667    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-xtables-lock\") pod \"kube-proxy-bqbws\" (UID: \"ca78ec0e-a7c1-4000-ac53-c7bad59a73f7\") " pod="kube-system/kube-proxy-bqbws"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277095    1326 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277138    1326 projected.go:196] Error preparing data for projected volume kube-api-access-d2w55 for pod kube-system/kube-proxy-bqbws: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277098    1326 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277221    1326 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-api-access-d2w55 podName:ca78ec0e-a7c1-4000-ac53-c7bad59a73f7 nodeName:}" failed. No retries permitted until 2025-10-29 09:02:19.777192065 +0000 UTC m=+5.420460601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d2w55" (UniqueName: "kubernetes.io/projected/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-api-access-d2w55") pod "kube-proxy-bqbws" (UID: "ca78ec0e-a7c1-4000-ac53-c7bad59a73f7") : configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277226    1326 projected.go:196] Error preparing data for projected volume kube-api-access-vnzn9 for pod kube-system/kindnet-tkv8d: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277287    1326 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-kube-api-access-vnzn9 podName:6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9 nodeName:}" failed. No retries permitted until 2025-10-29 09:02:19.777267366 +0000 UTC m=+5.420535905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vnzn9" (UniqueName: "kubernetes.io/projected/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-kube-api-access-vnzn9") pod "kindnet-tkv8d" (UID: "6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9") : configmap "kube-root-ca.crt" not found
	Oct 29 09:02:20 pause-470577 kubelet[1326]: I1029 09:02:20.536776    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tkv8d" podStartSLOduration=1.536754414 podStartE2EDuration="1.536754414s" podCreationTimestamp="2025-10-29 09:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:02:20.523839227 +0000 UTC m=+6.167107784" watchObservedRunningTime="2025-10-29 09:02:20.536754414 +0000 UTC m=+6.180022973"
	Oct 29 09:02:20 pause-470577 kubelet[1326]: I1029 09:02:20.564601    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bqbws" podStartSLOduration=1.564572809 podStartE2EDuration="1.564572809s" podCreationTimestamp="2025-10-29 09:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:02:20.564302005 +0000 UTC m=+6.207570562" watchObservedRunningTime="2025-10-29 09:02:20.564572809 +0000 UTC m=+6.207841364"
	Oct 29 09:03:00 pause-470577 kubelet[1326]: I1029 09:03:00.839413    1326 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:03:00 pause-470577 kubelet[1326]: I1029 09:03:00.974948    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55856b5d-4d88-46ea-867b-fb904a23bd57-config-volume\") pod \"coredns-66bc5c9577-6v49k\" (UID: \"55856b5d-4d88-46ea-867b-fb904a23bd57\") " pod="kube-system/coredns-66bc5c9577-6v49k"
	Oct 29 09:03:00 pause-470577 kubelet[1326]: I1029 09:03:00.975025    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bxg\" (UniqueName: \"kubernetes.io/projected/55856b5d-4d88-46ea-867b-fb904a23bd57-kube-api-access-94bxg\") pod \"coredns-66bc5c9577-6v49k\" (UID: \"55856b5d-4d88-46ea-867b-fb904a23bd57\") " pod="kube-system/coredns-66bc5c9577-6v49k"
	Oct 29 09:03:01 pause-470577 kubelet[1326]: I1029 09:03:01.623754    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6v49k" podStartSLOduration=41.623731797 podStartE2EDuration="41.623731797s" podCreationTimestamp="2025-10-29 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:03:01.608715445 +0000 UTC m=+47.251984003" watchObservedRunningTime="2025-10-29 09:03:01.623731797 +0000 UTC m=+47.267000355"
	Oct 29 09:03:07 pause-470577 kubelet[1326]: W1029 09:03:07.609595    1326 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 29 09:03:07 pause-470577 kubelet[1326]: E1029 09:03:07.609708    1326 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 29 09:03:07 pause-470577 kubelet[1326]: E1029 09:03:07.609762    1326 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 29 09:03:07 pause-470577 kubelet[1326]: E1029 09:03:07.609776    1326 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 29 09:03:11 pause-470577 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:03:11 pause-470577 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:03:11 pause-470577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:03:11 pause-470577 systemd[1]: kubelet.service: Consumed 2.295s CPU time.
	

-- /stdout --
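The kubelet tail above ends with dials to /var/run/crio/crio.sock failing with "connect: no such file or directory" at 09:03:07, a few seconds before systemd stops kubelet; that pattern is consistent with the container runtime being taken down while kubelet was still polling it. A minimal on-node probe of that socket (a hypothetical diagnostic sketch, not part of the harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the CRI-O socket kubelet could no longer reach. The same
	// "connect: no such file or directory" error seen in the kubelet log
	// means nothing is listening there (runtime stopped or not yet back up).
	conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
	if err != nil {
		fmt.Println("crio socket unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("crio socket reachable")
}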
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-470577 -n pause-470577
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-470577 -n pause-470577: exit status 2 (411.099575ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
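The --format flags exercised here ({{.APIServer}} above, {{.Host}} further down) are Go text/template expressions that minikube evaluates against its status struct before printing. A minimal sketch of that evaluation, using a stand-in Status type that only mirrors the fields this harness queries (the real struct lives in minikube's source):

package main

import (
	"os"
	"text/template"
)

// Stand-in for minikube's status struct; only the fields the harness
// templates reference are declared here.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// Illustrative values, not taken from the cluster.
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	// Same template text the harness passes via --format={{.APIServer}}.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	tmpl.Execute(os.Stdout, st) // prints "Running", matching the stdout above
}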
helpers_test.go:269: (dbg) Run:  kubectl --context pause-470577 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-470577
helpers_test.go:243: (dbg) docker inspect pause-470577:

-- stdout --
	[
	    {
	        "Id": "eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444",
	        "Created": "2025-10-29T09:01:52.493950754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 178255,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:01:53.086456256Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/hostname",
	        "HostsPath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/hosts",
	        "LogPath": "/var/lib/docker/containers/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444/eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444-json.log",
	        "Name": "/pause-470577",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-470577:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-470577",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eab5fc42d7ed05aaceb17aff024f6540f6960b791c55034790141fe6ea5cb444",
	                "LowerDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ddce99f27a73fb6ea9135af1b6d6587c9bd39d0cbbe2d6ede861e06a0837f67/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-470577",
	                "Source": "/var/lib/docker/volumes/pause-470577/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-470577",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-470577",
	                "name.minikube.sigs.k8s.io": "pause-470577",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c03619781b7ed029c71b28525d5f2168f40036e301d06a087e95e8d7245c7281",
	            "SandboxKey": "/var/run/docker/netns/c03619781b7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-470577": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:f4:16:52:9b:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ea35bc3b260c5668bd6e55f3b0da1671c1f08b741be62af97f0116b0e0f02b51",
	                    "EndpointID": "f6cd0d6bd98a2d79e866710fdb324e179c65f371b9af09210430181297dfee30",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-470577",
	                        "eab5fc42d7ed"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
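Note that State in the inspect dump reports "Running": true with "Paused": false: the kic container itself was never docker-paused, which is consistent with minikube pausing workloads inside the node rather than the outer container, and lines up with the "Running" strings the status checks print. A short sketch (not part of the harness) that extracts just those State fields from the same inspect JSON:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Stand-in for the array of objects `docker inspect` prints; only the
// fields read below are declared.
type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-470577").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
		os.Exit(1)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Fprintln(os.Stderr, "decoding inspect output failed:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Against the dump above: /pause-470577 status=running running=true paused=false
		fmt.Printf("%s status=%s running=%v paused=%v\n",
			e.Name, e.State.Status, e.State.Running, e.State.Paused)
	}
}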
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-470577 -n pause-470577
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-470577 -n pause-470577: exit status 2 (464.67463ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-470577 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-470577 logs -n 25: (1.138767288s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-240549 sudo systemctl cat docker --no-pager                                                                       │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /etc/docker/daemon.json                                                                           │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo docker system info                                                                                    │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cri-dockerd --version                                                                                 │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl cat containerd --no-pager                                                                   │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo cat /etc/containerd/config.toml                                                                       │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo containerd config dump                                                                                │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo systemctl cat crio --no-pager                                                                         │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ ssh     │ -p cilium-240549 sudo crio config                                                                                           │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ delete  │ -p cilium-240549                                                                                                            │ cilium-240549             │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │ 29 Oct 25 09:02 UTC │
	│ start   │ -p force-systemd-flag-699681 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-699681 │ jenkins │ v1.37.0 │ 29 Oct 25 09:02 UTC │                     │
	│ start   │ -p pause-470577 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-470577              │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │ 29 Oct 25 09:03 UTC │
	│ start   │ -p NoKubernetes-808010 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-808010       │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │ 29 Oct 25 09:03 UTC │
	│ delete  │ -p force-systemd-env-317579                                                                                                 │ force-systemd-env-317579  │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │ 29 Oct 25 09:03 UTC │
	│ start   │ -p cert-expiration-230123 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-230123    │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │                     │
	│ pause   │ -p pause-470577 --alsologtostderr -v=5                                                                                      │ pause-470577              │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │                     │
	│ delete  │ -p NoKubernetes-808010                                                                                                      │ NoKubernetes-808010       │ jenkins │ v1.37.0 │ 29 Oct 25 09:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:03:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:03:09.690039  203542 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:03:09.690345  203542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:03:09.690349  203542 out.go:374] Setting ErrFile to fd 2...
	I1029 09:03:09.690352  203542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:03:09.690590  203542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:03:09.691105  203542 out.go:368] Setting JSON to false
	I1029 09:03:09.692228  203542 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2738,"bootTime":1761725852,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:03:09.692285  203542 start.go:143] virtualization: kvm guest
	I1029 09:03:09.694645  203542 out.go:179] * [cert-expiration-230123] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:03:09.696372  203542 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:03:09.696381  203542 notify.go:221] Checking for updates...
	I1029 09:03:09.698757  203542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:03:09.700066  203542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:03:09.701390  203542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:03:09.702641  203542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:03:09.704025  203542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:03:09.705805  203542 config.go:182] Loaded profile config "NoKubernetes-808010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1029 09:03:09.705892  203542 config.go:182] Loaded profile config "force-systemd-flag-699681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:03:09.706007  203542 config.go:182] Loaded profile config "pause-470577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:03:09.706102  203542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:03:09.734800  203542 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:03:09.734957  203542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:03:09.799219  203542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-29 09:03:09.789269692 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:03:09.799357  203542 docker.go:319] overlay module found
	I1029 09:03:09.801268  203542 out.go:179] * Using the docker driver based on user configuration
	I1029 09:03:09.802407  203542 start.go:309] selected driver: docker
	I1029 09:03:09.802414  203542 start.go:930] validating driver "docker" against <nil>
	I1029 09:03:09.802427  203542 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:03:09.803229  203542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:03:09.868178  203542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-29 09:03:09.857283709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:03:09.868314  203542 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:03:09.868540  203542 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 09:03:09.870229  203542 out.go:179] * Using Docker driver with root privileges
	I1029 09:03:09.871423  203542 cni.go:84] Creating CNI manager for ""
	I1029 09:03:09.871483  203542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:03:09.871489  203542 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:03:09.871597  203542 start.go:353] cluster config:
	{Name:cert-expiration-230123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-230123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:03:09.872913  203542 out.go:179] * Starting "cert-expiration-230123" primary control-plane node in "cert-expiration-230123" cluster
	I1029 09:03:09.874110  203542 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:03:09.875392  203542 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:03:09.876523  203542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:03:09.876558  203542 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:03:09.876567  203542 cache.go:59] Caching tarball of preloaded images
	I1029 09:03:09.876625  203542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:03:09.876659  203542 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:03:09.876668  203542 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:03:09.876764  203542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/cert-expiration-230123/config.json ...
	I1029 09:03:09.876784  203542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/cert-expiration-230123/config.json: {Name:mka3f9f7c1a7bc3f23e527343e7d6d6cd2f84459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:03:09.900936  203542 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:03:09.900946  203542 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:03:09.900960  203542 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:03:09.901000  203542 start.go:360] acquireMachinesLock for cert-expiration-230123: {Name:mk3d18b8b6520166b822ade184d069687ae67ed0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:03:09.901109  203542 start.go:364] duration metric: took 92.497µs to acquireMachinesLock for "cert-expiration-230123"
	I1029 09:03:09.901134  203542 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-230123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-230123 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:03:09.901198  203542 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:03:08.834214  201876 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1029 09:03:08.863118  201876 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1029 09:03:08.863203  201876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:03:08.895012  201876 cri.go:89] found id: "bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37"
	I1029 09:03:08.895040  201876 cri.go:89] found id: "f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa"
	I1029 09:03:08.895047  201876 cri.go:89] found id: "6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988"
	I1029 09:03:08.895052  201876 cri.go:89] found id: "1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188"
	I1029 09:03:08.895056  201876 cri.go:89] found id: ""
	W1029 09:03:08.895066  201876 kubeadm.go:839] found 4 kube-system containers to stop
	I1029 09:03:08.895075  201876 cri.go:252] Stopping containers: [bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37 f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa 6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988 1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188]
	I1029 09:03:08.895133  201876 ssh_runner.go:195] Run: which crictl
	I1029 09:03:08.899383  201876 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37 f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa 6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988 1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188
	I1029 09:03:10.492687  201876 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 bf6daae108c4d260eabf210c58dd3613603d99a12e6234058220c644dd22cc37 f93d056ae365cb9e669737c725f1531ca6892cf7ebfc0a32a62e928b8b1001aa 6cdfbb64f13cfc451ed731459f7287f1ae59c00e09a7fb123d621bd0e0bb6988 1370f972f304bbc46731ed9a7dd500d7eec15e58ee082722e0559a45934d0188: (1.593266409s)
	I1029 09:03:10.492771  201876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:03:10.511363  201876 out.go:179]   - Kubernetes: Stopped
	I1029 09:03:09.101405  200413 addons.go:515] duration metric: took 5.301023ms for enable addons: enabled=[]
	I1029 09:03:09.101448  200413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:03:09.222782  200413 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:03:09.236414  200413 node_ready.go:35] waiting up to 6m0s for node "pause-470577" to be "Ready" ...
	I1029 09:03:09.244338  200413 node_ready.go:49] node "pause-470577" is "Ready"
	I1029 09:03:09.244366  200413 node_ready.go:38] duration metric: took 7.90925ms for node "pause-470577" to be "Ready" ...
	I1029 09:03:09.244382  200413 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:03:09.244431  200413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:03:09.258080  200413 api_server.go:72] duration metric: took 162.001273ms to wait for apiserver process to appear ...
	I1029 09:03:09.258105  200413 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:03:09.258125  200413 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:03:09.262276  200413 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:03:09.263241  200413 api_server.go:141] control plane version: v1.34.1
	I1029 09:03:09.263269  200413 api_server.go:131] duration metric: took 5.156483ms to wait for apiserver health ...
	I1029 09:03:09.263282  200413 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:03:09.266317  200413 system_pods.go:59] 7 kube-system pods found
	I1029 09:03:09.266359  200413 system_pods.go:61] "coredns-66bc5c9577-6v49k" [55856b5d-4d88-46ea-867b-fb904a23bd57] Running
	I1029 09:03:09.266366  200413 system_pods.go:61] "etcd-pause-470577" [d43e38a0-e74d-44f6-a56c-ffed2baf8b0e] Running
	I1029 09:03:09.266372  200413 system_pods.go:61] "kindnet-tkv8d" [6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9] Running
	I1029 09:03:09.266377  200413 system_pods.go:61] "kube-apiserver-pause-470577" [7a7a8174-631f-4c30-b67f-9552c2219a85] Running
	I1029 09:03:09.266382  200413 system_pods.go:61] "kube-controller-manager-pause-470577" [cb8c97e9-826d-4f1b-8533-ccf16d437083] Running
	I1029 09:03:09.266388  200413 system_pods.go:61] "kube-proxy-bqbws" [ca78ec0e-a7c1-4000-ac53-c7bad59a73f7] Running
	I1029 09:03:09.266397  200413 system_pods.go:61] "kube-scheduler-pause-470577" [35b51979-eb1d-4c32-ae92-279085d9cd3e] Running
	I1029 09:03:09.266405  200413 system_pods.go:74] duration metric: took 3.116047ms to wait for pod list to return data ...
	I1029 09:03:09.266417  200413 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:03:09.268502  200413 default_sa.go:45] found service account: "default"
	I1029 09:03:09.268523  200413 default_sa.go:55] duration metric: took 2.100627ms for default service account to be created ...
	I1029 09:03:09.268533  200413 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:03:09.271303  200413 system_pods.go:86] 7 kube-system pods found
	I1029 09:03:09.271326  200413 system_pods.go:89] "coredns-66bc5c9577-6v49k" [55856b5d-4d88-46ea-867b-fb904a23bd57] Running
	I1029 09:03:09.271332  200413 system_pods.go:89] "etcd-pause-470577" [d43e38a0-e74d-44f6-a56c-ffed2baf8b0e] Running
	I1029 09:03:09.271339  200413 system_pods.go:89] "kindnet-tkv8d" [6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9] Running
	I1029 09:03:09.271343  200413 system_pods.go:89] "kube-apiserver-pause-470577" [7a7a8174-631f-4c30-b67f-9552c2219a85] Running
	I1029 09:03:09.271346  200413 system_pods.go:89] "kube-controller-manager-pause-470577" [cb8c97e9-826d-4f1b-8533-ccf16d437083] Running
	I1029 09:03:09.271349  200413 system_pods.go:89] "kube-proxy-bqbws" [ca78ec0e-a7c1-4000-ac53-c7bad59a73f7] Running
	I1029 09:03:09.271352  200413 system_pods.go:89] "kube-scheduler-pause-470577" [35b51979-eb1d-4c32-ae92-279085d9cd3e] Running
	I1029 09:03:09.271359  200413 system_pods.go:126] duration metric: took 2.820479ms to wait for k8s-apps to be running ...
	I1029 09:03:09.271368  200413 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:03:09.271406  200413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:03:09.284706  200413 system_svc.go:56] duration metric: took 13.324717ms WaitForService to wait for kubelet
	I1029 09:03:09.284751  200413 kubeadm.go:587] duration metric: took 188.676898ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:03:09.284785  200413 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:03:09.287541  200413 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:03:09.287585  200413 node_conditions.go:123] node cpu capacity is 8
	I1029 09:03:09.287601  200413 node_conditions.go:105] duration metric: took 2.80973ms to run NodePressure ...
	I1029 09:03:09.287615  200413 start.go:242] waiting for startup goroutines ...
	I1029 09:03:09.287624  200413 start.go:247] waiting for cluster config update ...
	I1029 09:03:09.287633  200413 start.go:256] writing updated cluster config ...
	I1029 09:03:09.287982  200413 ssh_runner.go:195] Run: rm -f paused
	I1029 09:03:09.292031  200413 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:03:09.292788  200413 kapi.go:59] client config for pause-470577: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.key", CAFile:"/home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:03:09.295797  200413 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6v49k" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.300355  200413 pod_ready.go:94] pod "coredns-66bc5c9577-6v49k" is "Ready"
	I1029 09:03:09.300377  200413 pod_ready.go:86] duration metric: took 4.559664ms for pod "coredns-66bc5c9577-6v49k" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.302308  200413 pod_ready.go:83] waiting for pod "etcd-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.306247  200413 pod_ready.go:94] pod "etcd-pause-470577" is "Ready"
	I1029 09:03:09.306272  200413 pod_ready.go:86] duration metric: took 3.940138ms for pod "etcd-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.308378  200413 pod_ready.go:83] waiting for pod "kube-apiserver-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.311897  200413 pod_ready.go:94] pod "kube-apiserver-pause-470577" is "Ready"
	I1029 09:03:09.311920  200413 pod_ready.go:86] duration metric: took 3.521562ms for pod "kube-apiserver-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.313835  200413 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.696283  200413 pod_ready.go:94] pod "kube-controller-manager-pause-470577" is "Ready"
	I1029 09:03:09.696306  200413 pod_ready.go:86] duration metric: took 382.452994ms for pod "kube-controller-manager-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:09.896551  200413 pod_ready.go:83] waiting for pod "kube-proxy-bqbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.296380  200413 pod_ready.go:94] pod "kube-proxy-bqbws" is "Ready"
	I1029 09:03:10.296408  200413 pod_ready.go:86] duration metric: took 399.830991ms for pod "kube-proxy-bqbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.496920  200413 pod_ready.go:83] waiting for pod "kube-scheduler-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.896684  200413 pod_ready.go:94] pod "kube-scheduler-pause-470577" is "Ready"
	I1029 09:03:10.896715  200413 pod_ready.go:86] duration metric: took 399.763639ms for pod "kube-scheduler-pause-470577" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:03:10.896728  200413 pod_ready.go:40] duration metric: took 1.604660801s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:03:10.955424  200413 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:03:10.957521  200413 out.go:179] * Done! kubectl is now configured to use "pause-470577" cluster and "default" namespace by default
	I1029 09:03:10.513337  201876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:03:10.554886  201876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:03:10.560625  201876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:03:10.560699  201876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:03:10.570964  201876 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:03:10.571020  201876 start.go:496] detecting cgroup driver to use...
	I1029 09:03:10.571059  201876 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:03:10.571129  201876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:03:10.588841  201876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:03:10.604127  201876 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:03:10.604187  201876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:03:10.621069  201876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:03:10.638493  201876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:03:10.750475  201876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:03:10.844883  201876 docker.go:234] disabling docker service ...
	I1029 09:03:10.844952  201876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:03:10.860791  201876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:03:10.876008  201876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:03:11.005796  201876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:03:11.140388  201876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:03:11.157964  201876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:03:11.174266  201876 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/linux/amd64/v0.0.0/kubeadm
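
The step above wires crictl to the CRI-O socket before any kubeadm binaries are fetched. A hedged sketch of the same wiring, run on the node with the paths as logged; `crictl version` should then report the runtime details that appear later in this log:

	sudo mkdir -p /etc
	# Point crictl at the CRI-O socket, exactly as the log's tee command does.
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	sudo crictl version   # expect RuntimeName: cri-o once the service is up
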
	I1029 09:03:08.458468  198407 out.go:252]   - Generating certificates and keys ...
	I1029 09:03:08.458591  198407 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:03:08.458687  198407 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:03:08.694528  198407 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:03:09.240939  198407 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:03:09.456210  198407 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:03:09.573931  198407 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:03:09.695055  198407 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:03:09.695246  198407 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-699681 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1029 09:03:09.880899  198407 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:03:09.881130  198407 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-699681 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1029 09:03:10.433817  198407 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:03:10.670727  198407 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:03:10.867321  198407 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:03:10.867499  198407 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:03:11.023436  198407 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:03:11.243787  198407 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:03:12.101035  198407 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:03:12.566689  198407 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:03:12.847259  198407 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:03:12.848974  198407 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:03:12.857625  198407 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:03:12.880965  198407 out.go:252]   - Booting up control plane ...
	I1029 09:03:12.881133  198407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:03:12.881249  198407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:03:12.881346  198407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:03:12.881488  198407 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:03:12.881609  198407 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:03:12.889173  198407 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:03:12.889763  198407 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:03:12.889826  198407 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:03:13.019255  198407 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:03:13.019472  198407 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
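
kubeadm's kubelet-check above polls the local healthz endpoint for up to 4m0s. The same probe can be reproduced directly on the node; a minimal sketch using the URL from the log line:

	# curl -sf exits non-zero until http://127.0.0.1:10248/healthz answers ok.
	until curl -sf http://127.0.0.1:10248/healthz >/dev/null; do
	  sleep 1
	done
	echo "kubelet is healthy"
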
	I1029 09:03:11.373176  201876 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1029 09:03:11.373254  201876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:03:11.385903  201876 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:03:11.385971  201876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:03:11.396325  201876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:03:11.406761  201876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:03:11.417015  201876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:03:11.426363  201876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:03:11.435070  201876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:03:11.443849  201876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:03:11.553635  201876 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:03:13.963124  201876 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.409451612s)
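
The sed edits above rewrite CRI-O's drop-in config (pause image, systemd cgroup manager, conmon cgroup) before the restart, which took about 2.4s here. Condensed into a shell sketch, with the commands and file path taken verbatim from the log:

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	# Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
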
	I1029 09:03:13.963162  201876 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:03:13.963214  201876 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:03:13.967444  201876 start.go:564] Will wait 60s for crictl version
	I1029 09:03:13.967502  201876 ssh_runner.go:195] Run: which crictl
	I1029 09:03:13.972018  201876 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:03:13.999684  201876 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:03:13.999773  201876 ssh_runner.go:195] Run: crio --version
	I1029 09:03:14.032268  201876 ssh_runner.go:195] Run: crio --version
	I1029 09:03:14.072079  201876 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1029 09:03:14.073761  201876 ssh_runner.go:195] Run: rm -f paused
	I1029 09:03:14.079508  201876 out.go:179] * Done! minikube is ready without Kubernetes!
	I1029 09:03:14.083761  201876 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
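
A usage example for the podman-env suggestion in the box above (hedged: depending on the local podman build, `podman --remote` may be required):

	eval "$(minikube podman-env)"   # exports CONTAINER_HOST and related variables for this profile
	podman images                   # now lists images inside the minikube node
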
	I1029 09:03:09.903194  203542 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:03:09.903415  203542 start.go:159] libmachine.API.Create for "cert-expiration-230123" (driver="docker")
	I1029 09:03:09.903443  203542 client.go:173] LocalClient.Create starting
	I1029 09:03:09.903512  203542 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 09:03:09.903540  203542 main.go:143] libmachine: Decoding PEM data...
	I1029 09:03:09.903552  203542 main.go:143] libmachine: Parsing certificate...
	I1029 09:03:09.903623  203542 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 09:03:09.903641  203542 main.go:143] libmachine: Decoding PEM data...
	I1029 09:03:09.903648  203542 main.go:143] libmachine: Parsing certificate...
	I1029 09:03:09.903955  203542 cli_runner.go:164] Run: docker network inspect cert-expiration-230123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:03:09.922464  203542 cli_runner.go:211] docker network inspect cert-expiration-230123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:03:09.922543  203542 network_create.go:284] running [docker network inspect cert-expiration-230123] to gather additional debugging logs...
	I1029 09:03:09.922561  203542 cli_runner.go:164] Run: docker network inspect cert-expiration-230123
	W1029 09:03:09.941037  203542 cli_runner.go:211] docker network inspect cert-expiration-230123 returned with exit code 1
	I1029 09:03:09.941059  203542 network_create.go:287] error running [docker network inspect cert-expiration-230123]: docker network inspect cert-expiration-230123: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-230123 not found
	I1029 09:03:09.941070  203542 network_create.go:289] output of [docker network inspect cert-expiration-230123]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-230123 not found
	
	** /stderr **
	I1029 09:03:09.941217  203542 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:03:09.960040  203542 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b27c046ec42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:bc:cb:4a:50:f2} reservation:<nil>}
	I1029 09:03:09.960555  203542 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0c15025939eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:79:05:d8:32:73} reservation:<nil>}
	I1029 09:03:09.961055  203542 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5e92a9c19423 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:ec:bb:72:ab:23} reservation:<nil>}
	I1029 09:03:09.961649  203542 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3a452800cf52 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:c7:f9:4a:e8:ac} reservation:<nil>}
	I1029 09:03:09.962398  203542 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ea35bc3b260c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:82:af:a8:68:e3:30} reservation:<nil>}
	I1029 09:03:09.963308  203542 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb7ee0}
	I1029 09:03:09.963328  203542 network_create.go:124] attempt to create docker network cert-expiration-230123 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1029 09:03:09.963381  203542 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-230123 cert-expiration-230123
	I1029 09:03:10.026401  203542 network_create.go:108] docker network cert-expiration-230123 192.168.94.0/24 created
	I1029 09:03:10.026424  203542 kic.go:121] calculated static IP "192.168.94.2" for the "cert-expiration-230123" container
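
Above, minikube scans the existing bridge subnets, picks the first free /24 (192.168.94.0/24), creates a dedicated network, and reserves .2 for the node. A minimal create-if-missing sketch with the docker CLI, using the values the log chose (the extra `-o --ip-masq -o --icc` bridge options from the log are omitted here):

	docker network inspect cert-expiration-230123 >/dev/null 2>&1 || \
	  docker network create --driver=bridge \
	    --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	    -o com.docker.network.driver.mtu=1500 \
	    cert-expiration-230123
	# Confirm the subnet actually assigned to the network.
	docker network inspect cert-expiration-230123 --format '{{(index .IPAM.Config 0).Subnet}}'
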
	I1029 09:03:10.026482  203542 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:03:10.046216  203542 cli_runner.go:164] Run: docker volume create cert-expiration-230123 --label name.minikube.sigs.k8s.io=cert-expiration-230123 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:03:10.066909  203542 oci.go:103] Successfully created a docker volume cert-expiration-230123
	I1029 09:03:10.066971  203542 cli_runner.go:164] Run: docker run --rm --name cert-expiration-230123-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-230123 --entrypoint /usr/bin/test -v cert-expiration-230123:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:03:10.480570  203542 oci.go:107] Successfully prepared a docker volume cert-expiration-230123
	I1029 09:03:10.480617  203542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:03:10.480641  203542 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:03:10.480720  203542 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-230123:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 09:03:13.834545  203542 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-230123:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.35376152s)
	I1029 09:03:13.834582  203542 kic.go:203] duration metric: took 3.353939258s to extract preloaded images to volume ...
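
The preload step above is a throwaway container that mounts the lz4 image tarball read-only and untars it into the profile's volume; it took ~3.35s here. The same invocation, with only the host cache path generalized to $HOME/.minikube (an assumption; this run used a Jenkins workspace path):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
	  -v cert-expiration-230123:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
	  -I lz4 -xf /preloaded.tar -C /extractDir   # extract the preloaded images into the volume
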
	W1029 09:03:13.834686  203542 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1029 09:03:13.834714  203542 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1029 09:03:13.834754  203542 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:03:13.908843  203542 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-230123 --name cert-expiration-230123 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-230123 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-230123 --network cert-expiration-230123 --ip 192.168.94.2 --volume cert-expiration-230123:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:03:14.239816  203542 cli_runner.go:164] Run: docker container inspect cert-expiration-230123 --format={{.State.Running}}
	I1029 09:03:14.264272  203542 cli_runner.go:164] Run: docker container inspect cert-expiration-230123 --format={{.State.Status}}
	I1029 09:03:14.288466  203542 cli_runner.go:164] Run: docker exec cert-expiration-230123 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:03:14.341767  203542 oci.go:144] the created container "cert-expiration-230123" has a running status.
	I1029 09:03:14.341809  203542 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/cert-expiration-230123/id_rsa...
	I1029 09:03:14.466852  203542 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-3727/.minikube/machines/cert-expiration-230123/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:03:14.503498  203542 cli_runner.go:164] Run: docker container inspect cert-expiration-230123 --format={{.State.Status}}
	I1029 09:03:14.531027  203542 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:03:14.531041  203542 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-230123 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:03:14.602761  203542 cli_runner.go:164] Run: docker container inspect cert-expiration-230123 --format={{.State.Status}}
	I1029 09:03:14.629708  203542 machine.go:94] provisionDockerMachine start ...
	I1029 09:03:14.629841  203542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-230123
	I1029 09:03:14.655954  203542 main.go:143] libmachine: Using SSH client type: native
	I1029 09:03:14.656333  203542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1029 09:03:14.656342  203542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:03:14.657057  203542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55110->127.0.0.1:33013: read: connection reset by peer
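
The "connection reset by peer" above is usually transient while sshd inside the fresh container is still coming up, and provisioning retries. A hedged sketch of the port lookup and SSH probe the machine provisioner performs (inspect template verbatim from the log; key path generalized to $HOME/.minikube):

	port=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' cert-expiration-230123)
	ssh -o StrictHostKeyChecking=no \
	    -i "$HOME/.minikube/machines/cert-expiration-230123/id_rsa" \
	    -p "$port" docker@127.0.0.1 hostname
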
	
	
	==> CRI-O <==
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.664754627Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.665643807Z" level=info msg="Conmon does support the --sync option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.665670452Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.665685329Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.66647355Z" level=info msg="Conmon does support the --sync option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.666492145Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.671374632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.671405205Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.672009363Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.67246151Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.672525898Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.678285147Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.727606235Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-6v49k Namespace:kube-system ID:7a4da8e8c7196bcdcb5359b75fdec570ea4472b216b4318ff17932bae94cb4fc UID:55856b5d-4d88-46ea-867b-fb904a23bd57 NetNS:/var/run/netns/c13cea3f-1c15-4233-8df4-8a01b409fc9b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ac80}] Aliases:map[]}"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.727894471Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-6v49k for CNI network kindnet (type=ptp)"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728516688Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728547912Z" level=info msg="Starting seccomp notifier watcher"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728613701Z" level=info msg="Create NRI interface"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728715099Z" level=info msg="built-in NRI default validator is disabled"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728731443Z" level=info msg="runtime interface created"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728741284Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728746082Z" level=info msg="runtime interface starting up..."
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728751092Z" level=info msg="starting plugins..."
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.728762988Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 29 09:03:07 pause-470577 crio[2185]: time="2025-10-29T09:03:07.729086296Z" level=info msg="No systemd watchdog enabled"
	Oct 29 09:03:07 pause-470577 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f3e422aa25145       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago       Running             coredns                   0                   7a4da8e8c7196       coredns-66bc5c9577-6v49k               kube-system
	850cb974c0fce       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   56 seconds ago       Running             kube-proxy                0                   290cccc322a17       kube-proxy-bqbws                       kube-system
	b8fcde080ae08       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   56 seconds ago       Running             kindnet-cni               0                   80f063baa9f44       kindnet-tkv8d                          kube-system
	c3bc4e6855a30       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   c26af36470cd1       etcd-pause-470577                      kube-system
	ccd4699a211e9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   3970681cce0fd       kube-controller-manager-pause-470577   kube-system
	4f2c8636745db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   645a44e386292       kube-scheduler-pause-470577            kube-system
	b94585b063ecb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   bd0e0755894da       kube-apiserver-pause-470577            kube-system
	
	
	==> coredns [f3e422aa25145c663153c917c76efc65f478c96247053f901e05108a0d8f5aa2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55930 - 49272 "HINFO IN 8875921612014454862.4833462212965654197. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.110871398s
	
	
	==> describe nodes <==
	Name:               pause-470577
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-470577
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=pause-470577
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_02_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:02:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-470577
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:03:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:02:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:02:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:02:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:03:00 +0000   Wed, 29 Oct 2025 09:03:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-470577
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                329e3ac1-f6c5-46e9-9bcc-483df219274f
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6v49k                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-pause-470577                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         62s
	  kube-system                 kindnet-tkv8d                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-pause-470577             250m (3%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-470577    200m (2%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-bqbws                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-pause-470577             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 56s   kube-proxy       
	  Normal  Starting                 62s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s   kubelet          Node pause-470577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s   kubelet          Node pause-470577 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s   kubelet          Node pause-470577 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s   node-controller  Node pause-470577 event: Registered Node pause-470577 in Controller
	  Normal  NodeReady                16s   kubelet          Node pause-470577 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.101648] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029373] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.989088] kauditd_printk_skb: 47 callbacks suppressed
	[Oct29 08:23] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.056844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023928] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000035] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023834] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +1.023898] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +4.031591] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[  +8.063160] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[ +16.382216] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 08:24] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	
	
	==> etcd [c3bc4e6855a3033b28fd7c26d7a35b91fff3c39be0c2aab57f86ab1f7a6f3c11] <==
	{"level":"warn","ts":"2025-10-29T09:02:11.114241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.120507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.136673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.151533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.166111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.182256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.194083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.205578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.218143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.250350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.264273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.279735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.286883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.298906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.310815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.324748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.333324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.341541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.349429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.358594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.367015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.390839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.398368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.409081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:02:11.469700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:03:16 up 45 min,  0 user,  load average: 3.75, 2.05, 1.40
	Linux pause-470577 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8fcde080ae08d40f388d6152681d0b178abcdd5cff2c6772f6bcc6980381a0a] <==
	I1029 09:02:20.338126       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:02:20.338604       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:02:20.338808       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:02:20.338849       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:02:20.338872       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:02:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:02:20.623081       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:02:20.623114       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:02:20.623124       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:02:20.623837       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1029 09:02:50.623722       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1029 09:02:50.623725       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1029 09:02:50.623726       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1029 09:02:50.624186       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1029 09:02:52.223292       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:02:52.223326       1 metrics.go:72] Registering metrics
	I1029 09:02:52.223408       1 controller.go:711] "Syncing nftables rules"
	I1029 09:03:00.629121       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:03:00.629179       1 main.go:301] handling current node
	I1029 09:03:10.631187       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:03:10.631249       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b94585b063ecb11c8ea16b9f526487723c9e9d4fdc61153923721f26a42ef4ba] <==
	E1029 09:02:12.190329       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1029 09:02:12.234902       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:02:12.247959       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:02:12.250685       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:02:12.259836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:02:12.263408       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:02:12.272696       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:02:13.030146       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:02:13.033613       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:02:13.033634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:02:13.583565       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:02:13.628166       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:02:13.737260       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:02:13.746148       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:02:13.747566       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:02:13.755072       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:02:14.090298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:02:14.585956       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:02:14.595389       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:02:14.602405       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:02:19.094264       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:02:19.094265       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:02:19.945651       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:02:19.997154       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:02:20.002178       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ccd4699a211e9aa9fc943cc8b5a917cb6496ee857828ec92e00a99c4eccef775] <==
	I1029 09:02:19.089795       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:02:19.089852       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:02:19.089858       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:02:19.090947       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:02:19.090979       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:02:19.091042       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:02:19.091142       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:02:19.091164       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:02:19.091275       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:02:19.091308       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:02:19.091183       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:02:19.091193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:02:19.091191       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:02:19.091836       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:02:19.091856       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:02:19.091865       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:02:19.092673       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:02:19.096825       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:02:19.097942       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:02:19.099222       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:02:19.105279       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:02:19.112054       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:02:19.117700       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:02:19.119305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:03:04.047772       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [850cb974c0fce9f271c5771918fbf2264257b76cd9e075d5544e45d28ab6dbe5] <==
	I1029 09:02:20.228755       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:02:20.324903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:02:20.426443       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:02:20.426568       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:02:20.426718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:02:20.456432       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:02:20.456617       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:02:20.463275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:02:20.463949       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:02:20.464088       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:02:20.466615       1 config.go:200] "Starting service config controller"
	I1029 09:02:20.466640       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:02:20.466699       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:02:20.466706       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:02:20.466751       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:02:20.466756       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:02:20.468017       1 config.go:309] "Starting node config controller"
	I1029 09:02:20.468041       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:02:20.566821       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:02:20.566869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:02:20.566917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:02:20.568196       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [4f2c8636745db7dc71fc4459d44cb39b28a4ab71dcf21c9341f4b0795cde0af8] <==
	E1029 09:02:12.186622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:02:12.186738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:02:12.186847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:02:12.186946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:02:12.187055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:02:12.187170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:02:12.187292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:02:12.187410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:02:12.187510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:02:12.187636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:02:12.187779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:02:12.188576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:02:12.191147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:02:13.020798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:02:13.034051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 09:02:13.111552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:02:13.147119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:02:13.147244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:02:13.167971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:02:13.173242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:02:13.311363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:02:13.317024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:02:13.347055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:02:13.357966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1029 09:02:15.966501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168548    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-proxy\") pod \"kube-proxy-bqbws\" (UID: \"ca78ec0e-a7c1-4000-ac53-c7bad59a73f7\") " pod="kube-system/kube-proxy-bqbws"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168575    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2w55\" (UniqueName: \"kubernetes.io/projected/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-api-access-d2w55\") pod \"kube-proxy-bqbws\" (UID: \"ca78ec0e-a7c1-4000-ac53-c7bad59a73f7\") " pod="kube-system/kube-proxy-bqbws"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168599    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-cni-cfg\") pod \"kindnet-tkv8d\" (UID: \"6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9\") " pod="kube-system/kindnet-tkv8d"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168624    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzn9\" (UniqueName: \"kubernetes.io/projected/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-kube-api-access-vnzn9\") pod \"kindnet-tkv8d\" (UID: \"6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9\") " pod="kube-system/kindnet-tkv8d"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: I1029 09:02:19.168667    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-xtables-lock\") pod \"kube-proxy-bqbws\" (UID: \"ca78ec0e-a7c1-4000-ac53-c7bad59a73f7\") " pod="kube-system/kube-proxy-bqbws"
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277095    1326 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277138    1326 projected.go:196] Error preparing data for projected volume kube-api-access-d2w55 for pod kube-system/kube-proxy-bqbws: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277098    1326 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277221    1326 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-api-access-d2w55 podName:ca78ec0e-a7c1-4000-ac53-c7bad59a73f7 nodeName:}" failed. No retries permitted until 2025-10-29 09:02:19.777192065 +0000 UTC m=+5.420460601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d2w55" (UniqueName: "kubernetes.io/projected/ca78ec0e-a7c1-4000-ac53-c7bad59a73f7-kube-api-access-d2w55") pod "kube-proxy-bqbws" (UID: "ca78ec0e-a7c1-4000-ac53-c7bad59a73f7") : configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277226    1326 projected.go:196] Error preparing data for projected volume kube-api-access-vnzn9 for pod kube-system/kindnet-tkv8d: configmap "kube-root-ca.crt" not found
	Oct 29 09:02:19 pause-470577 kubelet[1326]: E1029 09:02:19.277287    1326 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-kube-api-access-vnzn9 podName:6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9 nodeName:}" failed. No retries permitted until 2025-10-29 09:02:19.777267366 +0000 UTC m=+5.420535905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vnzn9" (UniqueName: "kubernetes.io/projected/6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9-kube-api-access-vnzn9") pod "kindnet-tkv8d" (UID: "6a6b8c2e-e830-4300-aa7b-eb7f2fe0e6a9") : configmap "kube-root-ca.crt" not found
	Oct 29 09:02:20 pause-470577 kubelet[1326]: I1029 09:02:20.536776    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tkv8d" podStartSLOduration=1.536754414 podStartE2EDuration="1.536754414s" podCreationTimestamp="2025-10-29 09:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:02:20.523839227 +0000 UTC m=+6.167107784" watchObservedRunningTime="2025-10-29 09:02:20.536754414 +0000 UTC m=+6.180022973"
	Oct 29 09:02:20 pause-470577 kubelet[1326]: I1029 09:02:20.564601    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bqbws" podStartSLOduration=1.564572809 podStartE2EDuration="1.564572809s" podCreationTimestamp="2025-10-29 09:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:02:20.564302005 +0000 UTC m=+6.207570562" watchObservedRunningTime="2025-10-29 09:02:20.564572809 +0000 UTC m=+6.207841364"
	Oct 29 09:03:00 pause-470577 kubelet[1326]: I1029 09:03:00.839413    1326 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:03:00 pause-470577 kubelet[1326]: I1029 09:03:00.974948    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55856b5d-4d88-46ea-867b-fb904a23bd57-config-volume\") pod \"coredns-66bc5c9577-6v49k\" (UID: \"55856b5d-4d88-46ea-867b-fb904a23bd57\") " pod="kube-system/coredns-66bc5c9577-6v49k"
	Oct 29 09:03:00 pause-470577 kubelet[1326]: I1029 09:03:00.975025    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bxg\" (UniqueName: \"kubernetes.io/projected/55856b5d-4d88-46ea-867b-fb904a23bd57-kube-api-access-94bxg\") pod \"coredns-66bc5c9577-6v49k\" (UID: \"55856b5d-4d88-46ea-867b-fb904a23bd57\") " pod="kube-system/coredns-66bc5c9577-6v49k"
	Oct 29 09:03:01 pause-470577 kubelet[1326]: I1029 09:03:01.623754    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6v49k" podStartSLOduration=41.623731797 podStartE2EDuration="41.623731797s" podCreationTimestamp="2025-10-29 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:03:01.608715445 +0000 UTC m=+47.251984003" watchObservedRunningTime="2025-10-29 09:03:01.623731797 +0000 UTC m=+47.267000355"
	Oct 29 09:03:07 pause-470577 kubelet[1326]: W1029 09:03:07.609595    1326 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 29 09:03:07 pause-470577 kubelet[1326]: E1029 09:03:07.609708    1326 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 29 09:03:07 pause-470577 kubelet[1326]: E1029 09:03:07.609762    1326 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 29 09:03:07 pause-470577 kubelet[1326]: E1029 09:03:07.609776    1326 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 29 09:03:11 pause-470577 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:03:11 pause-470577 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:03:11 pause-470577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:03:11 pause-470577 systemd[1]: kubelet.service: Consumed 2.295s CPU time.
	

-- /stdout --
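
The kubelet tail above ends with the runtime socket disappearing (dial unix /var/run/crio/crio.sock: no such file or directory), which is what the pause flow looks like when crio is stopped underneath a still-running kubelet. A minimal manual check, assuming the pause-470577 profile from this run is still up:

	# Is crio still active inside the node container?
	out/minikube-linux-amd64 ssh -p pause-470577 -- sudo systemctl is-active crio
	# Query the same CRI socket the kubelet was dialing
	out/minikube-linux-amd64 ssh -p pause-470577 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
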
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-470577 -n pause-470577
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-470577 -n pause-470577: exit status 2 (366.496736ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-470577 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.54s)
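
To separate a genuine pause failure from a status-reporting quirk, the failing sequence can be replayed by hand; a sketch against the same profile:

	# Pause, read back the reported apiserver state, then unpause
	out/minikube-linux-amd64 pause -p pause-470577
	out/minikube-linux-amd64 status -p pause-470577 --format '{{.APIServer}}'
	out/minikube-linux-amd64 unpause -p pause-470577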

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.735553ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:09:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-096492 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-096492 describe deploy/metrics-server -n kube-system: exit status 1 (64.826845ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-096492 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
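
The NotFound above is a downstream symptom: the enable aborted before any deployment was created, because the paused-state probe shells out to runc and finds no /run/runc state directory. That probe can be re-run directly; a sketch against the same profile (the command is the one quoted in the stderr above):

	# Re-run minikube's paused-state check by hand
	out/minikube-linux-amd64 ssh -p old-k8s-version-096492 -- sudo runc list -f json
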
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-096492
helpers_test.go:243: (dbg) docker inspect old-k8s-version-096492:

-- stdout --
	[
	    {
	        "Id": "949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487",
	        "Created": "2025-10-29T09:08:32.774738315Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287355,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:08:32.824952453Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/hostname",
	        "HostsPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/hosts",
	        "LogPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487-json.log",
	        "Name": "/old-k8s-version-096492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-096492:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-096492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487",
	                "LowerDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-096492",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-096492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-096492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-096492",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-096492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11686d7597821d82fea344d3d3996e945df61d67b15fab36a3fe05df3c0224c8",
	            "SandboxKey": "/var/run/docker/netns/11686d759782",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-096492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:e2:9b:91:0f:a8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1d4705eea8799ddd63b1a9cbeb0ede40231eb0a1209d909b2eae8f7a7d7c543",
	                    "EndpointID": "9bfc39e01d881efd86571ab6b67ef12b8c99f12c205e33b6e1d390bdb0907463",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-096492",
	                        "949e662a4724"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
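
When only a slice of this inspect output matters (the published ports, say), a Go-template query trims the post-mortem; a sketch against the same container:

	# Print just the published port map for the node container
	docker inspect old-k8s-version-096492 --format '{{json .NetworkSettings.Ports}}'
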
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-096492 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-096492 logs -n 25: (2.333482286s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-240549 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo docker system info                                                                                                                                 │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cri-dockerd --version                                                                                                                              │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo containerd config dump                                                                                                                             │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo crio config                                                                                                                                        │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p bridge-240549                                                                                                                                                         │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-318335                                                                                                                                          │ disable-driver-mounts-318335 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:09:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:09:26.586398  302556 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:09:26.586683  302556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:26.586702  302556 out.go:374] Setting ErrFile to fd 2...
	I1029 09:09:26.586705  302556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:26.587046  302556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:09:26.587819  302556 out.go:368] Setting JSON to false
	I1029 09:09:26.589246  302556 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3115,"bootTime":1761725852,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:09:26.589311  302556 start.go:143] virtualization: kvm guest
	I1029 09:09:26.591194  302556 out.go:179] * [default-k8s-diff-port-017274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:09:26.592802  302556 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:09:26.593053  302556 notify.go:221] Checking for updates...
	I1029 09:09:26.595353  302556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:09:26.596548  302556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:09:26.597654  302556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:09:26.598716  302556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:09:26.602237  302556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:09:26.603973  302556 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:26.604131  302556 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:26.604202  302556 config.go:182] Loaded profile config "old-k8s-version-096492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:09:26.604301  302556 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:09:26.630525  302556 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:09:26.630664  302556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:26.690543  302556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:09:26.679954066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:26.690659  302556 docker.go:319] overlay module found
	I1029 09:09:26.692453  302556 out.go:179] * Using the docker driver based on user configuration
	I1029 09:09:26.693655  302556 start.go:309] selected driver: docker
	I1029 09:09:26.693673  302556 start.go:930] validating driver "docker" against <nil>
	I1029 09:09:26.693686  302556 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:09:26.694285  302556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:26.755644  302556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:09:26.744845414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:26.755830  302556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:09:26.756121  302556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:09:26.757833  302556 out.go:179] * Using Docker driver with root privileges
	I1029 09:09:26.758943  302556 cni.go:84] Creating CNI manager for ""
	I1029 09:09:26.759055  302556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:09:26.759068  302556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:09:26.759147  302556 start.go:353] cluster config:
	{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:09:26.760424  302556 out.go:179] * Starting "default-k8s-diff-port-017274" primary control-plane node in "default-k8s-diff-port-017274" cluster
	I1029 09:09:26.761536  302556 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:09:26.762616  302556 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:09:26.763817  302556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:09:26.763879  302556 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:09:26.763907  302556 cache.go:59] Caching tarball of preloaded images
	I1029 09:09:26.763942  302556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:09:26.764036  302556 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:09:26.764054  302556 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:09:26.764201  302556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:09:26.764232  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json: {Name:mke336b64d933f60f421058bc59f599f614cb71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:26.786506  302556 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:09:26.786533  302556 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:09:26.786551  302556 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:09:26.786581  302556 start.go:360] acquireMachinesLock for default-k8s-diff-port-017274: {Name:mkec68307c2ffe0cd4f9e8fcf3c8e2dc4c6d4bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:26.786704  302556 start.go:364] duration metric: took 102.967µs to acquireMachinesLock for "default-k8s-diff-port-017274"
	I1029 09:09:26.786737  302556 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:09:26.786849  302556 start.go:125] createHost starting for "" (driver="docker")
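
The acquireMachinesLock and lock.go lines above follow a timed lock-acquisition pattern: retry every Delay (500ms) until Timeout, then give up. A minimal Go sketch of that shape; the lock-file mechanism, path, and helper name are illustrative assumptions, not minikube's real implementation:

// proflock_sketch.go: a timed lock-acquisition loop in the shape of the
// Delay:500ms Timeout:1m0s knobs logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries an exclusive create of path every delay until timeout
// elapses; the returned func releases the lock by removing the file.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held")
}
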
	I1029 09:09:26.188355  292184 pod_ready.go:94] pod "kube-proxy-bxthb" is "Ready"
	I1029 09:09:26.188389  292184 pod_ready.go:86] duration metric: took 399.779532ms for pod "kube-proxy-bxthb" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.388706  292184 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.788345  292184 pod_ready.go:94] pod "kube-scheduler-embed-certs-834228" is "Ready"
	I1029 09:09:26.788388  292184 pod_ready.go:86] duration metric: took 399.616885ms for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.788403  292184 pod_ready.go:40] duration metric: took 1.605163852s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:26.839284  292184 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:09:26.842361  292184 out.go:179] * Done! kubectl is now configured to use "embed-certs-834228" cluster and "default" namespace by default
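
The pod_ready.go lines above poll each control-plane pod until it either reports Ready or disappears, under a per-wait deadline (4m0s here). A minimal client-go sketch of that wait loop, assuming a local kubeconfig; the helper name and 400ms poll interval are illustrative:

// podready_sketch.go: wait until a pod is Ready or gone, as traced above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // "or be gone": a deleted pod also ends the wait
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod is "Ready"
				}
			}
		}
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReadyOrGone(context.Background(), cs, "kube-system", "kube-scheduler-embed-certs-834228", 4*time.Minute)
	fmt.Println("result:", err)
}
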
	W1029 09:09:24.627967  287408 node_ready.go:57] node "no-preload-043790" has "Ready":"False" status (will retry)
	I1029 09:09:25.628113  287408 node_ready.go:49] node "no-preload-043790" is "Ready"
	I1029 09:09:25.628142  287408 node_ready.go:38] duration metric: took 13.503742028s for node "no-preload-043790" to be "Ready" ...
	I1029 09:09:25.628161  287408 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:09:25.628222  287408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:09:25.661909  287408 api_server.go:72] duration metric: took 13.910561121s to wait for apiserver process to appear ...
	I1029 09:09:25.661940  287408 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:09:25.661963  287408 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1029 09:09:25.669411  287408 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1029 09:09:25.670743  287408 api_server.go:141] control plane version: v1.34.1
	I1029 09:09:25.670789  287408 api_server.go:131] duration metric: took 8.839676ms to wait for apiserver health ...
	I1029 09:09:25.670800  287408 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:09:25.676908  287408 system_pods.go:59] 8 kube-system pods found
	I1029 09:09:25.676956  287408 system_pods.go:61] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.676966  287408 system_pods.go:61] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.676974  287408 system_pods.go:61] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.676986  287408 system_pods.go:61] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.677028  287408 system_pods.go:61] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.677033  287408 system_pods.go:61] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.677038  287408 system_pods.go:61] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.677045  287408 system_pods.go:61] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.677058  287408 system_pods.go:74] duration metric: took 6.25048ms to wait for pod list to return data ...
	I1029 09:09:25.677068  287408 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:09:25.680283  287408 default_sa.go:45] found service account: "default"
	I1029 09:09:25.680308  287408 default_sa.go:55] duration metric: took 3.233907ms for default service account to be created ...
	I1029 09:09:25.680319  287408 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:09:25.683528  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:25.683577  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.683587  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.683594  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.683601  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.683608  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.683613  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.683618  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.683626  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.683658  287408 retry.go:31] will retry after 280.052496ms: missing components: kube-dns
	I1029 09:09:25.967483  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:25.967520  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.967527  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.967532  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.967536  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.967541  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.967544  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.967547  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.967552  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.967567  287408 retry.go:31] will retry after 253.86945ms: missing components: kube-dns
	I1029 09:09:26.225761  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:26.225795  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Running
	I1029 09:09:26.225803  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:26.225809  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:26.225813  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:26.225819  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:26.225822  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:26.225826  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:26.225829  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Running
	I1029 09:09:26.225838  287408 system_pods.go:126] duration metric: took 545.513139ms to wait for k8s-apps to be running ...
	I1029 09:09:26.225847  287408 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:09:26.225910  287408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:09:26.241077  287408 system_svc.go:56] duration metric: took 15.218505ms WaitForService to wait for kubelet
	I1029 09:09:26.241119  287408 kubeadm.go:587] duration metric: took 14.48978974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:09:26.241143  287408 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:09:26.244412  287408 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:09:26.244448  287408 node_conditions.go:123] node cpu capacity is 8
	I1029 09:09:26.244465  287408 node_conditions.go:105] duration metric: took 3.314653ms to run NodePressure ...
	I1029 09:09:26.244479  287408 start.go:242] waiting for startup goroutines ...
	I1029 09:09:26.244488  287408 start.go:247] waiting for cluster config update ...
	I1029 09:09:26.244504  287408 start.go:256] writing updated cluster config ...
	I1029 09:09:26.244877  287408 ssh_runner.go:195] Run: rm -f paused
	I1029 09:09:26.249655  287408 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:26.254294  287408 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.259361  287408 pod_ready.go:94] pod "coredns-66bc5c9577-bgslp" is "Ready"
	I1029 09:09:26.259388  287408 pod_ready.go:86] duration metric: took 5.060691ms for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.261898  287408 pod_ready.go:83] waiting for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.266636  287408 pod_ready.go:94] pod "etcd-no-preload-043790" is "Ready"
	I1029 09:09:26.266663  287408 pod_ready.go:86] duration metric: took 4.740634ms for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.269175  287408 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.273624  287408 pod_ready.go:94] pod "kube-apiserver-no-preload-043790" is "Ready"
	I1029 09:09:26.273649  287408 pod_ready.go:86] duration metric: took 4.450389ms for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.275707  287408 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.655203  287408 pod_ready.go:94] pod "kube-controller-manager-no-preload-043790" is "Ready"
	I1029 09:09:26.655236  287408 pod_ready.go:86] duration metric: took 379.505293ms for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.855462  287408 pod_ready.go:83] waiting for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.254802  287408 pod_ready.go:94] pod "kube-proxy-7dc8p" is "Ready"
	I1029 09:09:27.254827  287408 pod_ready.go:86] duration metric: took 399.334643ms for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.454125  287408 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.853871  287408 pod_ready.go:94] pod "kube-scheduler-no-preload-043790" is "Ready"
	I1029 09:09:27.853895  287408 pod_ready.go:86] duration metric: took 399.7441ms for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.853907  287408 pod_ready.go:40] duration metric: took 1.604212036s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:27.904823  287408 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:09:27.909160  287408 out.go:179] * Done! kubectl is now configured to use "no-preload-043790" cluster and "default" namespace by default
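
The healthz wait at 09:09:25 above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A minimal sketch; InsecureSkipVerify is a shortcut for this sketch only, where the real client would trust the cluster's minikubeCA certificate instead:

// healthz_sketch.go: GET https://<apiserver>/healthz and require a 200 "ok",
// as the api_server.go lines above do.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.94.2:8443/healthz"); err != nil {
		panic(err)
	}
}
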
	
	
	==> CRI-O <==
	Oct 29 09:09:18 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:18.130592544Z" level=info msg="Starting container: ab1e809c7681729f98983be684278d72f817b56aec226c0fefbc51fb4ef49a2b" id=63369f4d-1295-43be-91b7-523cad169d36 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:09:18 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:18.132934939Z" level=info msg="Started container" PID=2116 containerID=ab1e809c7681729f98983be684278d72f817b56aec226c0fefbc51fb4ef49a2b description=kube-system/coredns-5dd5756b68-v5mr5/coredns id=63369f4d-1295-43be-91b7-523cad169d36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d92fcfc3919c6e5ae3826e29489cf2c0408db57e8cad09ffea567d1da60adaee
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.686974535Z" level=info msg="Running pod sandbox: default/busybox/POD" id=aadf0cd4-522d-415c-9e92-feede22dd495 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.687081538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.692330295Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2b1b10e67eec3afc9d541216d6c8ceda27af3fb515c6494c609d88e923acc967 UID:1fa1733a-b2ef-4af9-af8c-342513147d4e NetNS:/var/run/netns/90dad6f2-ecbd-4811-a78a-251e09374802 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002d4948}] Aliases:map[]}"
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.692373841Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.703261155Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2b1b10e67eec3afc9d541216d6c8ceda27af3fb515c6494c609d88e923acc967 UID:1fa1733a-b2ef-4af9-af8c-342513147d4e NetNS:/var/run/netns/90dad6f2-ecbd-4811-a78a-251e09374802 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002d4948}] Aliases:map[]}"
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.703394319Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.704422949Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.705766544Z" level=info msg="Ran pod sandbox 2b1b10e67eec3afc9d541216d6c8ceda27af3fb515c6494c609d88e923acc967 with infra container: default/busybox/POD" id=aadf0cd4-522d-415c-9e92-feede22dd495 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.707170477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a80ec971-e195-4476-a419-343c9aadfd1e name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.707326838Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a80ec971-e195-4476-a419-343c9aadfd1e name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.707377023Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a80ec971-e195-4476-a419-343c9aadfd1e name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.70790705Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11bd9c49-24a1-452d-a277-69156b14310f name=/runtime.v1.ImageService/PullImage
	Oct 29 09:09:20 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:20.712315151Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.524458721Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=11bd9c49-24a1-452d-a277-69156b14310f name=/runtime.v1.ImageService/PullImage
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.525474475Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=829c5d67-987e-4c94-a605-5a3175c32eb1 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.527337317Z" level=info msg="Creating container: default/busybox/busybox" id=b3d695b9-d6d9-4db0-b62c-f8d3eb5d11d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.52752925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.531733101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.532229656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.565811918Z" level=info msg="Created container 5cd3c2195698e1d1d896bb43ab346f6e40f118bbde9c342ee68acfee2e3f8746: default/busybox/busybox" id=b3d695b9-d6d9-4db0-b62c-f8d3eb5d11d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.566560515Z" level=info msg="Starting container: 5cd3c2195698e1d1d896bb43ab346f6e40f118bbde9c342ee68acfee2e3f8746" id=e6005f78-eb8b-4e42-8988-4dc43a3838ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:09:21 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:21.568545984Z" level=info msg="Started container" PID=2193 containerID=5cd3c2195698e1d1d896bb43ab346f6e40f118bbde9c342ee68acfee2e3f8746 description=default/busybox/busybox id=e6005f78-eb8b-4e42-8988-4dc43a3838ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b1b10e67eec3afc9d541216d6c8ceda27af3fb515c6494c609d88e923acc967
	Oct 29 09:09:29 old-k8s-version-096492 crio[769]: time="2025-10-29T09:09:29.472920568Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
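
The ImageStatus/PullImage pair in the CRI-O log above is the standard CRI flow: ask the image service whether the image exists, and pull only on a miss. A minimal sketch of those two RPCs against CRI-O's socket, assuming k8s.io/cri-api and an insecure local gRPC dial:

// cripull_sketch.go: the two RPCs the log names,
// /runtime.v1.ImageService/ImageStatus followed by PullImage.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's socket, per the cri-socket annotation elsewhere in this report.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
	ic := runtimeapi.NewImageServiceClient(conn)

	status, err := ic.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		panic(err)
	}
	if status.Image == nil { // "Image ... not found" in the log: pull it
		resp, err := ic.PullImage(context.Background(), &runtimeapi.PullImageRequest{Image: img})
		if err != nil {
			panic(err)
		}
		fmt.Println("Pulled image:", resp.ImageRef)
	} else {
		fmt.Println("Image already present:", status.Image.Id)
	}
}
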
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	5cd3c2195698e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   10 seconds ago      Running             busybox                   0                   2b1b10e67eec3       busybox                                          default
	ab1e809c76817       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   d92fcfc3919c6       coredns-5dd5756b68-v5mr5                         kube-system
	47f57ece3452d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   eab576c06025b       storage-provisioner                              kube-system
	c4909e1f6a51f       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   d00d21d69d7e7       kindnet-7qztm                                    kube-system
	13493fe7d02b0       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   86d270f70cae6       kube-proxy-8kpqf                                 kube-system
	97254ba3942eb       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   a75976f9f03dc       kube-controller-manager-old-k8s-version-096492   kube-system
	a6b498f7c21ff       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   c4c4ccba4f3c3       kube-apiserver-old-k8s-version-096492            kube-system
	e95132e289b4d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   28f1808b621a3       etcd-old-k8s-version-096492                      kube-system
	3664ea0b59506       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   3ba20fb3f7b35       kube-scheduler-old-k8s-version-096492            kube-system
	
	
	==> coredns [ab1e809c7681729f98983be684278d72f817b56aec226c0fefbc51fb4ef49a2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51940 - 2134 "HINFO IN 5759986468176770844.8375114373033121756. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.061333684s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-096492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-096492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=old-k8s-version-096492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_08_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:08:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-096492
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:09:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:09:23 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:09:23 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:09:23 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:09:23 +0000   Wed, 29 Oct 2025 09:09:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-096492
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9ea7cc04-2266-42af-af7f-14c5bd55b0ca
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-v5mr5                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-096492                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-7qztm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-096492             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-096492    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-8kpqf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-096492             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-096492 event: Registered Node old-k8s-version-096492 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-096492 status is now: NodeReady
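
The conditions and capacity in this describe output come straight from node.Status. A minimal client-go sketch that prints the same condition table for this node, assuming a local kubeconfig:

// nodecond_sketch.go: read the node conditions shown above via the API.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "old-k8s-version-096492", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same Type/Status/Reason columns as the Conditions table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
}
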
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [e95132e289b4dae85659c6d12c67491b962e368b1f9e243de722cdb7766779e3] <==
	{"level":"info","ts":"2025-10-29T09:08:47.170242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-29T09:08:47.170262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:08:47.170268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-29T09:08:47.170281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-29T09:08:47.170293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-29T09:08:47.170983Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:08:47.171915Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:08:47.17204Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:08:47.17207Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:08:47.172101Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-096492 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:08:47.172256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:08:47.172322Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:08:47.172339Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-29T09:08:47.172267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:08:47.173859Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-29T09:08:47.174738Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-29T09:08:53.661746Z","caller":"traceutil/trace.go:171","msg":"trace[697278894] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"129.39732ms","start":"2025-10-29T09:08:53.532319Z","end":"2025-10-29T09:08:53.661717Z","steps":["trace[697278894] 'process raft request'  (duration: 68.307589ms)","trace[697278894] 'compare'  (duration: 60.901762ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-29T09:08:53.66179Z","caller":"traceutil/trace.go:171","msg":"trace[202453775] linearizableReadLoop","detail":"{readStateIndex:306; appliedIndex:304; }","duration":"100.788107ms","start":"2025-10-29T09:08:53.560984Z","end":"2025-10-29T09:08:53.661772Z","steps":["trace[202453775] 'read index received'  (duration: 39.703346ms)","trace[202453775] 'applied index is now lower than readState.Index'  (duration: 61.083113ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-29T09:08:53.661817Z","caller":"traceutil/trace.go:171","msg":"trace[983844491] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"103.109581ms","start":"2025-10-29T09:08:53.558694Z","end":"2025-10-29T09:08:53.661803Z","steps":["trace[983844491] 'process raft request'  (duration: 102.9551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-29T09:08:53.661945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.936477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-096492\" ","response":"range_response_count:1 size:4952"}
	{"level":"info","ts":"2025-10-29T09:08:53.662008Z","caller":"traceutil/trace.go:171","msg":"trace[1079128969] range","detail":"{range_begin:/registry/minions/old-k8s-version-096492; range_end:; response_count:1; response_revision:296; }","duration":"101.034306ms","start":"2025-10-29T09:08:53.56095Z","end":"2025-10-29T09:08:53.661984Z","steps":["trace[1079128969] 'agreement among raft nodes before linearized reading'  (duration: 100.880295ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:08:53.853222Z","caller":"traceutil/trace.go:171","msg":"trace[267647039] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"178.295758ms","start":"2025-10-29T09:08:53.674908Z","end":"2025-10-29T09:08:53.853203Z","steps":["trace[267647039] 'process raft request'  (duration: 178.236825ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:08:53.853303Z","caller":"traceutil/trace.go:171","msg":"trace[293441866] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"179.844832ms","start":"2025-10-29T09:08:53.673437Z","end":"2025-10-29T09:08:53.853282Z","steps":["trace[293441866] 'process raft request'  (duration: 109.438905ms)","trace[293441866] 'compare'  (duration: 70.148453ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-29T09:08:54.018581Z","caller":"traceutil/trace.go:171","msg":"trace[922233195] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"152.74391ms","start":"2025-10-29T09:08:53.865803Z","end":"2025-10-29T09:08:54.018547Z","steps":["trace[922233195] 'process raft request'  (duration: 132.435552ms)","trace[922233195] 'compare'  (duration: 20.090131ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-29T09:09:31.013731Z","caller":"traceutil/trace.go:171","msg":"trace[1405391169] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"165.986041ms","start":"2025-10-29T09:09:30.847721Z","end":"2025-10-29T09:09:31.013707Z","steps":["trace[1405391169] 'process raft request'  (duration: 148.92389ms)","trace[1405391169] 'compare'  (duration: 16.893633ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:09:32 up 51 min,  0 user,  load average: 6.85, 4.12, 2.47
	Linux old-k8s-version-096492 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4909e1f6a51f905b1d64383b3427079fada6dfdcc8a50e49a949b386d15827d] <==
	I1029 09:09:07.448193       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:09:07.448567       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:09:07.448745       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:09:07.448768       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:09:07.448804       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:09:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:09:07.651433       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:09:07.651458       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:09:07.651489       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:09:07.651667       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:09:08.047977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:09:08.048045       1 metrics.go:72] Registering metrics
	I1029 09:09:08.048120       1 controller.go:711] "Syncing nftables rules"
	I1029 09:09:17.655201       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:09:17.655265       1 main.go:301] handling current node
	I1029 09:09:27.654128       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:09:27.654182       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a6b498f7c21ff24fcee57fa6f1b51083b9c5bc2cd34ccab310d4433d64b049e9] <==
	I1029 09:08:49.171465       1 shared_informer.go:318] Caches are synced for configmaps
	I1029 09:08:49.172724       1 controller.go:624] quota admission added evaluator for: namespaces
	I1029 09:08:49.173372       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1029 09:08:49.173469       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1029 09:08:49.173852       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1029 09:08:49.173880       1 aggregator.go:166] initial CRD sync complete...
	I1029 09:08:49.173900       1 autoregister_controller.go:141] Starting autoregister controller
	I1029 09:08:49.173907       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:08:49.173915       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:08:49.202698       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:08:50.076095       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:08:50.079908       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:08:50.079930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:08:50.616490       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:08:50.659869       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:08:50.781256       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:08:50.787920       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:08:50.789416       1 controller.go:624] quota admission added evaluator for: endpoints
	I1029 09:08:50.794544       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:08:51.115146       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1029 09:08:52.329597       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1029 09:08:52.342507       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:08:52.355211       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1029 09:09:03.975934       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1029 09:09:04.728780       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [97254ba3942eb96560d45ce287c725da565e8ec8eaddcf7b108c64689deb468a] <==
	I1029 09:09:04.066751       1 shared_informer.go:318] Caches are synced for stateful set
	I1029 09:09:04.085627       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:09:04.127376       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:09:04.504658       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:09:04.550615       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:09:04.550655       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1029 09:09:04.748805       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7qztm"
	I1029 09:09:04.748828       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8kpqf"
	I1029 09:09:04.838426       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wzdpg"
	I1029 09:09:04.853128       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-v5mr5"
	I1029 09:09:04.894514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="914.734738ms"
	I1029 09:09:04.927543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.851642ms"
	I1029 09:09:04.928013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="213.483µs"
	I1029 09:09:04.949709       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="857.567µs"
	I1029 09:09:04.973148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.631µs"
	I1029 09:09:05.487051       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1029 09:09:05.500752       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wzdpg"
	I1029 09:09:05.514320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.477495ms"
	I1029 09:09:05.525362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.802446ms"
	I1029 09:09:05.525514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.077µs"
	I1029 09:09:17.779569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.534µs"
	I1029 09:09:17.794277       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.281µs"
	I1029 09:09:18.532758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.572368ms"
	I1029 09:09:18.532876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.446µs"
	I1029 09:09:18.972776       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [13493fe7d02b02595c66e20074465bec654886fd18bab33897b522c16d706d2d] <==
	I1029 09:09:05.323585       1 server_others.go:69] "Using iptables proxy"
	I1029 09:09:05.340439       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1029 09:09:05.384558       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:09:05.388857       1 server_others.go:152] "Using iptables Proxier"
	I1029 09:09:05.388917       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1029 09:09:05.388928       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1029 09:09:05.388970       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1029 09:09:05.389290       1 server.go:846] "Version info" version="v1.28.0"
	I1029 09:09:05.389305       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:09:05.389946       1 config.go:188] "Starting service config controller"
	I1029 09:09:05.390059       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1029 09:09:05.390215       1 config.go:315] "Starting node config controller"
	I1029 09:09:05.390270       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1029 09:09:05.392601       1 config.go:97] "Starting endpoint slice config controller"
	I1029 09:09:05.392634       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1029 09:09:05.490935       1 shared_informer.go:318] Caches are synced for service config
	I1029 09:09:05.491165       1 shared_informer.go:318] Caches are synced for node config
	I1029 09:09:05.493377       1 shared_informer.go:318] Caches are synced for endpoint slice config
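
kube-proxy's "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake: start the factory, then block until each informer's local cache has completed its initial list. A minimal sketch of the same handshake for the service informer, assuming a local kubeconfig:

// informersync_sketch.go: the cache-sync handshake logged by kube-proxy above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	fmt.Println("Waiting for caches to sync for service config")
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("Caches are synced for service config")
}
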
	
	
	==> kube-scheduler [3664ea0b595068f34ae70cf0b998226392fa149a45a29226ba47eb5bc4b2dd88] <==
	W1029 09:08:49.126894       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1029 09:08:49.126914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1029 09:08:49.127016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1029 09:08:49.127037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1029 09:08:49.951527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1029 09:08:49.951568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1029 09:08:49.975598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1029 09:08:49.975639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1029 09:08:50.040906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1029 09:08:50.040961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1029 09:08:50.062624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1029 09:08:50.062662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1029 09:08:50.106613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1029 09:08:50.106658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1029 09:08:50.112978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1029 09:08:50.113040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1029 09:08:50.281453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1029 09:08:50.281496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1029 09:08:50.322207       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1029 09:08:50.322242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1029 09:08:50.341877       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1029 09:08:50.341912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1029 09:08:50.351588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1029 09:08:50.351632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1029 09:08:50.723283       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 29 09:09:03 old-k8s-version-096492 kubelet[1386]: I1029 09:09:03.995365    1386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.753307    1386 topology_manager.go:215] "Topology Admit Handler" podUID="34799f5c-3bdd-4fa6-be66-a77a7ebe00f8" podNamespace="kube-system" podName="kube-proxy-8kpqf"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.761595    1386 topology_manager.go:215] "Topology Admit Handler" podUID="6d656d18-bd80-4efa-b002-2e13a052ff06" podNamespace="kube-system" podName="kindnet-7qztm"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768400    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d656d18-bd80-4efa-b002-2e13a052ff06-xtables-lock\") pod \"kindnet-7qztm\" (UID: \"6d656d18-bd80-4efa-b002-2e13a052ff06\") " pod="kube-system/kindnet-7qztm"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768481    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d656d18-bd80-4efa-b002-2e13a052ff06-lib-modules\") pod \"kindnet-7qztm\" (UID: \"6d656d18-bd80-4efa-b002-2e13a052ff06\") " pod="kube-system/kindnet-7qztm"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768529    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34799f5c-3bdd-4fa6-be66-a77a7ebe00f8-xtables-lock\") pod \"kube-proxy-8kpqf\" (UID: \"34799f5c-3bdd-4fa6-be66-a77a7ebe00f8\") " pod="kube-system/kube-proxy-8kpqf"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768561    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34799f5c-3bdd-4fa6-be66-a77a7ebe00f8-kube-proxy\") pod \"kube-proxy-8kpqf\" (UID: \"34799f5c-3bdd-4fa6-be66-a77a7ebe00f8\") " pod="kube-system/kube-proxy-8kpqf"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768598    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxmz\" (UniqueName: \"kubernetes.io/projected/6d656d18-bd80-4efa-b002-2e13a052ff06-kube-api-access-wsxmz\") pod \"kindnet-7qztm\" (UID: \"6d656d18-bd80-4efa-b002-2e13a052ff06\") " pod="kube-system/kindnet-7qztm"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768637    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34799f5c-3bdd-4fa6-be66-a77a7ebe00f8-lib-modules\") pod \"kube-proxy-8kpqf\" (UID: \"34799f5c-3bdd-4fa6-be66-a77a7ebe00f8\") " pod="kube-system/kube-proxy-8kpqf"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768667    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkjbl\" (UniqueName: \"kubernetes.io/projected/34799f5c-3bdd-4fa6-be66-a77a7ebe00f8-kube-api-access-kkjbl\") pod \"kube-proxy-8kpqf\" (UID: \"34799f5c-3bdd-4fa6-be66-a77a7ebe00f8\") " pod="kube-system/kube-proxy-8kpqf"
	Oct 29 09:09:04 old-k8s-version-096492 kubelet[1386]: I1029 09:09:04.768727    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d656d18-bd80-4efa-b002-2e13a052ff06-cni-cfg\") pod \"kindnet-7qztm\" (UID: \"6d656d18-bd80-4efa-b002-2e13a052ff06\") " pod="kube-system/kindnet-7qztm"
	Oct 29 09:09:06 old-k8s-version-096492 kubelet[1386]: I1029 09:09:06.898949    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8kpqf" podStartSLOduration=2.898881199 podCreationTimestamp="2025-10-29 09:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:05.490146384 +0000 UTC m=+13.190331665" watchObservedRunningTime="2025-10-29 09:09:06.898881199 +0000 UTC m=+14.599066481"
	Oct 29 09:09:07 old-k8s-version-096492 kubelet[1386]: I1029 09:09:07.487888    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-7qztm" podStartSLOduration=1.47072881 podCreationTimestamp="2025-10-29 09:09:04 +0000 UTC" firstStartedPulling="2025-10-29 09:09:05.106754854 +0000 UTC m=+12.806940126" lastFinishedPulling="2025-10-29 09:09:07.123856594 +0000 UTC m=+14.824041857" observedRunningTime="2025-10-29 09:09:07.487589854 +0000 UTC m=+15.187775154" watchObservedRunningTime="2025-10-29 09:09:07.487830541 +0000 UTC m=+15.188015823"
	Oct 29 09:09:17 old-k8s-version-096492 kubelet[1386]: I1029 09:09:17.754387    1386 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 29 09:09:17 old-k8s-version-096492 kubelet[1386]: I1029 09:09:17.779973    1386 topology_manager.go:215] "Topology Admit Handler" podUID="c73ffe63-3e51-47e1-a466-110f80cedb9d" podNamespace="kube-system" podName="coredns-5dd5756b68-v5mr5"
	Oct 29 09:09:17 old-k8s-version-096492 kubelet[1386]: I1029 09:09:17.780268    1386 topology_manager.go:215] "Topology Admit Handler" podUID="8e81d736-a277-4ca4-b50e-d930d86ab51e" podNamespace="kube-system" podName="storage-provisioner"
	Oct 29 09:09:17 old-k8s-version-096492 kubelet[1386]: I1029 09:09:17.863613    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6llk\" (UniqueName: \"kubernetes.io/projected/c73ffe63-3e51-47e1-a466-110f80cedb9d-kube-api-access-w6llk\") pod \"coredns-5dd5756b68-v5mr5\" (UID: \"c73ffe63-3e51-47e1-a466-110f80cedb9d\") " pod="kube-system/coredns-5dd5756b68-v5mr5"
	Oct 29 09:09:17 old-k8s-version-096492 kubelet[1386]: I1029 09:09:17.863679    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8e81d736-a277-4ca4-b50e-d930d86ab51e-tmp\") pod \"storage-provisioner\" (UID: \"8e81d736-a277-4ca4-b50e-d930d86ab51e\") " pod="kube-system/storage-provisioner"
	Oct 29 09:09:17 old-k8s-version-096492 kubelet[1386]: I1029 09:09:17.863809    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73ffe63-3e51-47e1-a466-110f80cedb9d-config-volume\") pod \"coredns-5dd5756b68-v5mr5\" (UID: \"c73ffe63-3e51-47e1-a466-110f80cedb9d\") " pod="kube-system/coredns-5dd5756b68-v5mr5"
	Oct 29 09:09:17 old-k8s-version-096492 kubelet[1386]: I1029 09:09:17.863855    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cmpb\" (UniqueName: \"kubernetes.io/projected/8e81d736-a277-4ca4-b50e-d930d86ab51e-kube-api-access-4cmpb\") pod \"storage-provisioner\" (UID: \"8e81d736-a277-4ca4-b50e-d930d86ab51e\") " pod="kube-system/storage-provisioner"
	Oct 29 09:09:18 old-k8s-version-096492 kubelet[1386]: I1029 09:09:18.524602    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-v5mr5" podStartSLOduration=14.524541266 podCreationTimestamp="2025-10-29 09:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:18.52406232 +0000 UTC m=+26.224247600" watchObservedRunningTime="2025-10-29 09:09:18.524541266 +0000 UTC m=+26.224726545"
	Oct 29 09:09:18 old-k8s-version-096492 kubelet[1386]: I1029 09:09:18.524747    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.524711701 podCreationTimestamp="2025-10-29 09:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:18.512702432 +0000 UTC m=+26.212887713" watchObservedRunningTime="2025-10-29 09:09:18.524711701 +0000 UTC m=+26.224896981"
	Oct 29 09:09:20 old-k8s-version-096492 kubelet[1386]: I1029 09:09:20.385056    1386 topology_manager.go:215] "Topology Admit Handler" podUID="1fa1733a-b2ef-4af9-af8c-342513147d4e" podNamespace="default" podName="busybox"
	Oct 29 09:09:20 old-k8s-version-096492 kubelet[1386]: I1029 09:09:20.478471    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk5rh\" (UniqueName: \"kubernetes.io/projected/1fa1733a-b2ef-4af9-af8c-342513147d4e-kube-api-access-xk5rh\") pod \"busybox\" (UID: \"1fa1733a-b2ef-4af9-af8c-342513147d4e\") " pod="default/busybox"
	Oct 29 09:09:22 old-k8s-version-096492 kubelet[1386]: I1029 09:09:22.526979    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.709701806 podCreationTimestamp="2025-10-29 09:09:20 +0000 UTC" firstStartedPulling="2025-10-29 09:09:20.707598025 +0000 UTC m=+28.407783288" lastFinishedPulling="2025-10-29 09:09:21.524811483 +0000 UTC m=+29.224996762" observedRunningTime="2025-10-29 09:09:22.52657325 +0000 UTC m=+30.226758529" watchObservedRunningTime="2025-10-29 09:09:22.52691528 +0000 UTC m=+30.227100560"
	
	
	==> storage-provisioner [47f57ece3452daadd0ec3f03f3fd99b69cdc1b44cb051f736b30e18166150679] <==
	I1029 09:09:18.144430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:09:18.154071       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:09:18.154123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1029 09:09:18.164717       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:09:18.164787       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e7c8c98-8fd4-43b1-9dc7-61c97a398c0b", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-096492_86c61728-57c9-4b9b-8de2-fc13922a2a5a became leader
	I1029 09:09:18.164928       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-096492_86c61728-57c9-4b9b-8de2-fc13922a2a5a!
	I1029 09:09:18.265286       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-096492_86c61728-57c9-4b9b-8de2-fc13922a2a5a!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096492 -n old-k8s-version-096492
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-096492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.52s)
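Note on the kube-scheduler log above: the paired "failed to list"/"Failed to watch ... is forbidden" messages are typically transient start-up noise — the scheduler's informers race RBAC bootstrapping and the errors stop once the "Caches are synced" line appears (09:08:50.723). A minimal sketch for checking the permission independently, assuming client-go and a hypothetical kubeconfig path, could ask the API server directly via a SubjectAccessReview:

	// sar_check.go: ask whether system:kube-scheduler may list services
	// cluster-wide, mirroring the reflector calls denied in the log above.
	// The kubeconfig path is an assumption for illustration.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "services",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", res.Status.Allowed) // true once RBAC has been bootstrapped
	}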

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.700463ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:09:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
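The exit status 11 here comes from the paused-state check, which shells out to `sudo runc list -f json`; the command aborts because the runc state directory /run/runc does not exist on this crio node. One plausible defensive probe, sketched below as hypothetical Go (not minikube's actual code), treats a missing state directory as "no containers" rather than a hard failure:

	// runc_probe.go: hypothetical sketch, assuming runc's default state
	// root of /run/runc; a missing root is treated as an empty list.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	func listContainers(root string) ([]byte, error) {
		if _, err := os.Stat(root); errors.Is(err, os.ErrNotExist) {
			return []byte("[]"), nil // no state dir yet -> nothing running or paused
		}
		return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	}

	func main() {
		out, err := listContainers("/run/runc")
		if err != nil {
			fmt.Fprintln(os.Stderr, "runc list failed:", err)
			os.Exit(1)
		}
		fmt.Println(string(out))
	}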
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-834228 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-834228 describe deploy/metrics-server -n kube-system: exit status 1 (65.343241ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-834228 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
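The assertion at start_stop_delete_test.go:219 is a plain substring check: the --images/--registries flags should rewrite the MetricsServer image to "fake.domain/registry.k8s.io/echoserver:1.4", and that string should appear in the `kubectl describe deploy/metrics-server` output. Because the enable command failed, the deployment info is empty and the check cannot pass. A minimal sketch of the check, with expectedImage as a hypothetical helper:

	// image_check.go: hypothetical reconstruction of the containment check.
	package main

	import (
		"fmt"
		"strings"
	)

	// expectedImage joins a custom registry and an addon image the way the
	// failure message implies: registry + "/" + image.
	func expectedImage(registry, image string) string {
		return strings.TrimSuffix(registry, "/") + "/" + image
	}

	func main() {
		want := expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4")
		deployInfo := "" // empty: `kubectl describe` returned NotFound above
		fmt.Println(strings.Contains(deployInfo, want)) // false -> test fails
	}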
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-834228
helpers_test.go:243: (dbg) docker inspect embed-certs-834228:

-- stdout --
	[
	    {
	        "Id": "078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1",
	        "Created": "2025-10-29T09:08:47.072061223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293197,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:08:47.157197209Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/hosts",
	        "LogPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1-json.log",
	        "Name": "/embed-certs-834228",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-834228:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-834228",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1",
	                "LowerDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-834228",
	                "Source": "/var/lib/docker/volumes/embed-certs-834228/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-834228",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-834228",
	                "name.minikube.sigs.k8s.io": "embed-certs-834228",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9501fd1b38808114a90f6fdde5495207ae1cfd3ed71256c2d8cd04319b602e1e",
	            "SandboxKey": "/var/run/docker/netns/9501fd1b3880",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-834228": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:1e:a2:4f:24:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86d19029abe0aa5f7ddaf42bf75485455d5c473387cb83ef6c0d4c78851e1205",
	                    "EndpointID": "cb72284dc4ac71899f5ae9fbdf0653ac30fb09a176e015024aef0977cddf9f9a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-834228",
	                        "078bf67023c0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
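In the inspect output above, each container port is published on 127.0.0.1 with an ephemeral host port (for example 8443/tcp, the apiserver port, maps to 33101); this is how the test harness reaches the cluster. A short sketch for pulling that mapping out of saved `docker inspect` JSON, assuming the output was written to a hypothetical inspect.json:

	// ports.go: decode NetworkSettings.Ports from `docker inspect` JSON.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		data, err := os.ReadFile("inspect.json") // assumed dump of the block above
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(data, &cs); err != nil || len(cs) == 0 {
			panic("no container found in inspect output")
		}
		for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33101
		}
	}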
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-834228 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-834228 logs -n 25: (1.023680922s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-240549 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo docker system info                                                                                                                                 │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cri-dockerd --version                                                                                                                              │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo containerd config dump                                                                                                                             │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo crio config                                                                                                                                        │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p bridge-240549                                                                                                                                                         │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-318335                                                                                                                                          │ disable-driver-mounts-318335 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:09:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:09:26.586398  302556 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:09:26.586683  302556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:26.586702  302556 out.go:374] Setting ErrFile to fd 2...
	I1029 09:09:26.586705  302556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:26.587046  302556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:09:26.587819  302556 out.go:368] Setting JSON to false
	I1029 09:09:26.589246  302556 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3115,"bootTime":1761725852,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:09:26.589311  302556 start.go:143] virtualization: kvm guest
	I1029 09:09:26.591194  302556 out.go:179] * [default-k8s-diff-port-017274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:09:26.592802  302556 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:09:26.593053  302556 notify.go:221] Checking for updates...
	I1029 09:09:26.595353  302556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:09:26.596548  302556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:09:26.597654  302556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:09:26.598716  302556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:09:26.602237  302556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:09:26.603973  302556 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:26.604131  302556 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:26.604202  302556 config.go:182] Loaded profile config "old-k8s-version-096492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:09:26.604301  302556 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:09:26.630525  302556 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:09:26.630664  302556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:26.690543  302556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:09:26.679954066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:26.690659  302556 docker.go:319] overlay module found
	I1029 09:09:26.692453  302556 out.go:179] * Using the docker driver based on user configuration
	I1029 09:09:26.693655  302556 start.go:309] selected driver: docker
	I1029 09:09:26.693673  302556 start.go:930] validating driver "docker" against <nil>
	I1029 09:09:26.693686  302556 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:09:26.694285  302556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:26.755644  302556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:09:26.744845414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:26.755830  302556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:09:26.756121  302556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:09:26.757833  302556 out.go:179] * Using Docker driver with root privileges
	I1029 09:09:26.758943  302556 cni.go:84] Creating CNI manager for ""
	I1029 09:09:26.759055  302556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:09:26.759068  302556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:09:26.759147  302556 start.go:353] cluster config:
	{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:09:26.760424  302556 out.go:179] * Starting "default-k8s-diff-port-017274" primary control-plane node in "default-k8s-diff-port-017274" cluster
	I1029 09:09:26.761536  302556 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:09:26.762616  302556 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:09:26.763817  302556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:09:26.763879  302556 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:09:26.763907  302556 cache.go:59] Caching tarball of preloaded images
	I1029 09:09:26.763942  302556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:09:26.764036  302556 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:09:26.764054  302556 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:09:26.764201  302556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:09:26.764232  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json: {Name:mke336b64d933f60f421058bc59f599f614cb71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:26.786506  302556 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:09:26.786533  302556 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:09:26.786551  302556 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:09:26.786581  302556 start.go:360] acquireMachinesLock for default-k8s-diff-port-017274: {Name:mkec68307c2ffe0cd4f9e8fcf3c8e2dc4c6d4bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:26.786704  302556 start.go:364] duration metric: took 102.967µs to acquireMachinesLock for "default-k8s-diff-port-017274"
	I1029 09:09:26.786737  302556 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:09:26.786849  302556 start.go:125] createHost starting for "" (driver="docker")
	I1029 09:09:26.188355  292184 pod_ready.go:94] pod "kube-proxy-bxthb" is "Ready"
	I1029 09:09:26.188389  292184 pod_ready.go:86] duration metric: took 399.779532ms for pod "kube-proxy-bxthb" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.388706  292184 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.788345  292184 pod_ready.go:94] pod "kube-scheduler-embed-certs-834228" is "Ready"
	I1029 09:09:26.788388  292184 pod_ready.go:86] duration metric: took 399.616885ms for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.788403  292184 pod_ready.go:40] duration metric: took 1.605163852s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:26.839284  292184 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:09:26.842361  292184 out.go:179] * Done! kubectl is now configured to use "embed-certs-834228" cluster and "default" namespace by default
	W1029 09:09:24.627967  287408 node_ready.go:57] node "no-preload-043790" has "Ready":"False" status (will retry)
	I1029 09:09:25.628113  287408 node_ready.go:49] node "no-preload-043790" is "Ready"
	I1029 09:09:25.628142  287408 node_ready.go:38] duration metric: took 13.503742028s for node "no-preload-043790" to be "Ready" ...
	I1029 09:09:25.628161  287408 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:09:25.628222  287408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:09:25.661909  287408 api_server.go:72] duration metric: took 13.910561121s to wait for apiserver process to appear ...
	I1029 09:09:25.661940  287408 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:09:25.661963  287408 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1029 09:09:25.669411  287408 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1029 09:09:25.670743  287408 api_server.go:141] control plane version: v1.34.1
	I1029 09:09:25.670789  287408 api_server.go:131] duration metric: took 8.839676ms to wait for apiserver health ...
	I1029 09:09:25.670800  287408 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:09:25.676908  287408 system_pods.go:59] 8 kube-system pods found
	I1029 09:09:25.676956  287408 system_pods.go:61] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.676966  287408 system_pods.go:61] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.676974  287408 system_pods.go:61] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.676986  287408 system_pods.go:61] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.677028  287408 system_pods.go:61] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.677033  287408 system_pods.go:61] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.677038  287408 system_pods.go:61] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.677045  287408 system_pods.go:61] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.677058  287408 system_pods.go:74] duration metric: took 6.25048ms to wait for pod list to return data ...
	I1029 09:09:25.677068  287408 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:09:25.680283  287408 default_sa.go:45] found service account: "default"
	I1029 09:09:25.680308  287408 default_sa.go:55] duration metric: took 3.233907ms for default service account to be created ...
	I1029 09:09:25.680319  287408 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:09:25.683528  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:25.683577  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.683587  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.683594  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.683601  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.683608  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.683613  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.683618  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.683626  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.683658  287408 retry.go:31] will retry after 280.052496ms: missing components: kube-dns
	I1029 09:09:25.967483  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:25.967520  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.967527  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.967532  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.967536  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.967541  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.967544  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.967547  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.967552  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.967567  287408 retry.go:31] will retry after 253.86945ms: missing components: kube-dns
	I1029 09:09:26.225761  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:26.225795  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Running
	I1029 09:09:26.225803  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:26.225809  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:26.225813  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:26.225819  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:26.225822  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:26.225826  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:26.225829  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Running
	I1029 09:09:26.225838  287408 system_pods.go:126] duration metric: took 545.513139ms to wait for k8s-apps to be running ...
	I1029 09:09:26.225847  287408 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:09:26.225910  287408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:09:26.241077  287408 system_svc.go:56] duration metric: took 15.218505ms WaitForService to wait for kubelet
	I1029 09:09:26.241119  287408 kubeadm.go:587] duration metric: took 14.48978974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:09:26.241143  287408 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:09:26.244412  287408 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:09:26.244448  287408 node_conditions.go:123] node cpu capacity is 8
	I1029 09:09:26.244465  287408 node_conditions.go:105] duration metric: took 3.314653ms to run NodePressure ...
	I1029 09:09:26.244479  287408 start.go:242] waiting for startup goroutines ...
	I1029 09:09:26.244488  287408 start.go:247] waiting for cluster config update ...
	I1029 09:09:26.244504  287408 start.go:256] writing updated cluster config ...
	I1029 09:09:26.244877  287408 ssh_runner.go:195] Run: rm -f paused
	I1029 09:09:26.249655  287408 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:26.254294  287408 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.259361  287408 pod_ready.go:94] pod "coredns-66bc5c9577-bgslp" is "Ready"
	I1029 09:09:26.259388  287408 pod_ready.go:86] duration metric: took 5.060691ms for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.261898  287408 pod_ready.go:83] waiting for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.266636  287408 pod_ready.go:94] pod "etcd-no-preload-043790" is "Ready"
	I1029 09:09:26.266663  287408 pod_ready.go:86] duration metric: took 4.740634ms for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.269175  287408 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.273624  287408 pod_ready.go:94] pod "kube-apiserver-no-preload-043790" is "Ready"
	I1029 09:09:26.273649  287408 pod_ready.go:86] duration metric: took 4.450389ms for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.275707  287408 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.655203  287408 pod_ready.go:94] pod "kube-controller-manager-no-preload-043790" is "Ready"
	I1029 09:09:26.655236  287408 pod_ready.go:86] duration metric: took 379.505293ms for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.855462  287408 pod_ready.go:83] waiting for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.254802  287408 pod_ready.go:94] pod "kube-proxy-7dc8p" is "Ready"
	I1029 09:09:27.254827  287408 pod_ready.go:86] duration metric: took 399.334643ms for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.454125  287408 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.853871  287408 pod_ready.go:94] pod "kube-scheduler-no-preload-043790" is "Ready"
	I1029 09:09:27.853895  287408 pod_ready.go:86] duration metric: took 399.7441ms for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.853907  287408 pod_ready.go:40] duration metric: took 1.604212036s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:27.904823  287408 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:09:27.909160  287408 out.go:179] * Done! kubectl is now configured to use "no-preload-043790" cluster and "default" namespace by default
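	[editor's note] The api_server.go healthz probe logged above ("Checking apiserver healthz at https://192.168.94.2:8443/healthz ... returned 200: ok") can be reproduced with the standard library alone. A sketch against the same endpoint; skipping TLS verification is a shortcut assumed here for a throwaway probe, whereas minikube itself authenticates with the cluster's certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a cert for its cluster addresses; real code
			// should trust the cluster CA rather than skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.94.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// The log prints the body on its own line: "ok".
					fmt.Printf("healthz returned 200: %s\n", body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}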
	I1029 09:09:26.789011  302556 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:09:26.789307  302556 start.go:159] libmachine.API.Create for "default-k8s-diff-port-017274" (driver="docker")
	I1029 09:09:26.789366  302556 client.go:173] LocalClient.Create starting
	I1029 09:09:26.789453  302556 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 09:09:26.789496  302556 main.go:143] libmachine: Decoding PEM data...
	I1029 09:09:26.789523  302556 main.go:143] libmachine: Parsing certificate...
	I1029 09:09:26.789596  302556 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 09:09:26.789620  302556 main.go:143] libmachine: Decoding PEM data...
	I1029 09:09:26.789631  302556 main.go:143] libmachine: Parsing certificate...
	I1029 09:09:26.789962  302556 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-017274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:09:26.810730  302556 cli_runner.go:211] docker network inspect default-k8s-diff-port-017274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:09:26.810829  302556 network_create.go:284] running [docker network inspect default-k8s-diff-port-017274] to gather additional debugging logs...
	I1029 09:09:26.810853  302556 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-017274
	W1029 09:09:26.830745  302556 cli_runner.go:211] docker network inspect default-k8s-diff-port-017274 returned with exit code 1
	I1029 09:09:26.830786  302556 network_create.go:287] error running [docker network inspect default-k8s-diff-port-017274]: docker network inspect default-k8s-diff-port-017274: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-017274 not found
	I1029 09:09:26.830805  302556 network_create.go:289] output of [docker network inspect default-k8s-diff-port-017274]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-017274 not found
	
	** /stderr **
	I1029 09:09:26.830891  302556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:09:26.851280  302556 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b27c046ec42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:bc:cb:4a:50:f2} reservation:<nil>}
	I1029 09:09:26.852100  302556 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0c15025939eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:79:05:d8:32:73} reservation:<nil>}
	I1029 09:09:26.852947  302556 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5e92a9c19423 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:ec:bb:72:ab:23} reservation:<nil>}
	I1029 09:09:26.853866  302556 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86d19029abe0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:37:1e:54:39:51} reservation:<nil>}
	I1029 09:09:26.854736  302556 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e1d4705eea87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ce:b7:20:c7:ca:ab} reservation:<nil>}
	I1029 09:09:26.855593  302556 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-dcc575c7384e IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c6:b4:23:94:8e:82} reservation:<nil>}
	I1029 09:09:26.856598  302556 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1aff0}
	I1029 09:09:26.856627  302556 network_create.go:124] attempt to create docker network default-k8s-diff-port-017274 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1029 09:09:26.856705  302556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-017274 default-k8s-diff-port-017274
	I1029 09:09:26.925107  302556 network_create.go:108] docker network default-k8s-diff-port-017274 192.168.103.0/24 created
	I1029 09:09:26.925158  302556 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-017274" container
	I1029 09:09:26.925221  302556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:09:26.944359  302556 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-017274 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-017274 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:09:26.966333  302556 oci.go:103] Successfully created a docker volume default-k8s-diff-port-017274
	I1029 09:09:26.966433  302556 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-017274-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-017274 --entrypoint /usr/bin/test -v default-k8s-diff-port-017274:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:09:27.387755  302556 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-017274
	I1029 09:09:27.387808  302556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:09:27.387839  302556 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:09:27.387949  302556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-017274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
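	[editor's note] The network_create.go sequence above scans 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, ...), skips subnets already claimed by a bridge interface, and then runs docker network create with the flags shown. Below is a simplified try-and-skip variant reusing the logged flags; relying on Docker's pool-overlap error to detect taken subnets is an assumption of this sketch (minikube inspects host interfaces instead), and the profile name is reused from the log purely for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "default-k8s-diff-port-017274"
		for third := 49; third <= 247; third += 9 { // 49, 58, 67, ... as in the log
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			gateway := fmt.Sprintf("192.168.%d.1", third)
			cmd := exec.Command("docker", "network", "create",
				"--driver=bridge",
				"--subnet="+subnet,
				"--gateway="+gateway,
				"-o", "--ip-masq", "-o", "--icc",
				"-o", "com.docker.network.driver.mtu=1500",
				"--label=created_by.minikube.sigs.k8s.io=true",
				"--label=name.minikube.sigs.k8s.io="+name,
				name)
			if out, err := cmd.CombinedOutput(); err != nil {
				// Docker rejects an overlapping subnet ("Pool overlaps with
				// other one on this address space"); try the next candidate,
				// mirroring the "skipping subnet ... that is taken" lines.
				fmt.Printf("skipping %s: %s", subnet, out)
				continue
			}
			fmt.Printf("created network %s on %s\n", name, subnet)
			return
		}
		fmt.Println("no free /24 found")
	}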
	
	
	==> CRI-O <==
	Oct 29 09:09:24 embed-certs-834228 crio[770]: time="2025-10-29T09:09:24.687952139Z" level=info msg="Starting container: 5d831d5a9eeb9cfae3b8da7d770343e6c9aeac947412d6ebb165d2d9affd2078" id=2bf1996f-d974-4596-bad5-00dab7cdc60f name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:09:24 embed-certs-834228 crio[770]: time="2025-10-29T09:09:24.690360737Z" level=info msg="Started container" PID=1823 containerID=5d831d5a9eeb9cfae3b8da7d770343e6c9aeac947412d6ebb165d2d9affd2078 description=kube-system/coredns-66bc5c9577-w9vf6/coredns id=2bf1996f-d974-4596-bad5-00dab7cdc60f name=/runtime.v1.RuntimeService/StartContainer sandboxID=19245c6ef81e11f9b0a3ddcba35476dd544fbad0f2debc55fc3644bb7613c166
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.350854064Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a108c8a7-0459-4101-b249-2bcb1a754eae name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.350932143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.359301471Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1753c224ee1576a12e09ca80778788115ffd1e5eada87ef1cf064925686f0a95 UID:47adfe3b-59ee-4d67-8d34-eb88528af861 NetNS:/var/run/netns/bec362d0-f45f-4dfe-b6c0-860539182946 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012aa48}] Aliases:map[]}"
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.359355514Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.371949615Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1753c224ee1576a12e09ca80778788115ffd1e5eada87ef1cf064925686f0a95 UID:47adfe3b-59ee-4d67-8d34-eb88528af861 NetNS:/var/run/netns/bec362d0-f45f-4dfe-b6c0-860539182946 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012aa48}] Aliases:map[]}"
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.372106473Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.374166684Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.375091727Z" level=info msg="Ran pod sandbox 1753c224ee1576a12e09ca80778788115ffd1e5eada87ef1cf064925686f0a95 with infra container: default/busybox/POD" id=a108c8a7-0459-4101-b249-2bcb1a754eae name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.376528196Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9cbc66ef-bcb0-460a-a002-54c57293d3c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.37675302Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9cbc66ef-bcb0-460a-a002-54c57293d3c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.37679006Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9cbc66ef-bcb0-460a-a002-54c57293d3c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.377580028Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29083bf6-9e86-41f0-9c39-7eb74f1a484e name=/runtime.v1.ImageService/PullImage
	Oct 29 09:09:27 embed-certs-834228 crio[770]: time="2025-10-29T09:09:27.379751574Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.149816618Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=29083bf6-9e86-41f0-9c39-7eb74f1a484e name=/runtime.v1.ImageService/PullImage
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.15073355Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3876a62d-1e3e-477c-9987-8a2213298030 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.152571896Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2f56d207-eecd-4b32-9de7-6cbb2ceca6a3 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.15619313Z" level=info msg="Creating container: default/busybox/busybox" id=ced7b950-877d-4939-823a-2d263227df79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.156340676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.160271483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.160670691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.212193722Z" level=info msg="Created container 1669fa0e279fafdc34275cf0f771f448027dd9be204b0be32d0b7259e4707d04: default/busybox/busybox" id=ced7b950-877d-4939-823a-2d263227df79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.212957506Z" level=info msg="Starting container: 1669fa0e279fafdc34275cf0f771f448027dd9be204b0be32d0b7259e4707d04" id=eff62efc-e3a6-4b66-b414-aa2e432b74ce name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:09:28 embed-certs-834228 crio[770]: time="2025-10-29T09:09:28.21534016Z" level=info msg="Started container" PID=1900 containerID=1669fa0e279fafdc34275cf0f771f448027dd9be204b0be32d0b7259e4707d04 description=default/busybox/busybox id=eff62efc-e3a6-4b66-b414-aa2e432b74ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=1753c224ee1576a12e09ca80778788115ffd1e5eada87ef1cf064925686f0a95
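	[editor's note] The CRI-O lines above are the server side of CRI calls: ImageStatus (the "Checking image status" entries), PullImage, CreateContainer, StartContainer. The image half of that exchange can be driven directly over the CRI gRPC API; a sketch assuming the k8s.io/cri-api module, grpc-go v1.63+ (use grpc.Dial on older releases), and CRI-O's default socket path:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
		ic := runtimeapi.NewImageServiceClient(conn)

		// "Checking image status" is an ImageStatus call; a nil Image in the
		// response is the "not found" case logged above.
		st, err := ic.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: img})
		if err != nil {
			panic(err)
		}
		if st.Image == nil {
			// "Pulling image" is a PullImage call; CRI-O resolves the tag to the
			// digest reported in the "Pulled image" line.
			resp, err := ic.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: img})
			if err != nil {
				panic(err)
			}
			fmt.Println("pulled:", resp.ImageRef)
		} else {
			fmt.Println("already present:", st.Image.Id)
		}
	}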
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	1669fa0e279fa       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1753c224ee157       busybox                                      default
	5d831d5a9eeb9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   19245c6ef81e1       coredns-66bc5c9577-w9vf6                     kube-system
	1d4e029501e91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   fb402d2846312       storage-provisioner                          kube-system
	39ec3dab384cc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   37ad8e99d8973       kindnet-dgkfz                                kube-system
	6b053551bea0a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   196c1af92f69c       kube-proxy-bxthb                             kube-system
	21b693d968cad       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   f50e55cdbd6b5       kube-scheduler-embed-certs-834228            kube-system
	8906dc244641e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   482aecd690ba6       kube-controller-manager-embed-certs-834228   kube-system
	3ee8b9b40964d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   2657dc47c073b       etcd-embed-certs-834228                      kube-system
	5311f6bddaf0f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   1beea12cc20ee       kube-apiserver-embed-certs-834228            kube-system
	
	
	==> coredns [5d831d5a9eeb9cfae3b8da7d770343e6c9aeac947412d6ebb165d2d9affd2078] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59833 - 10154 "HINFO IN 1866924652265051118.6899185776583761953. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.116302193s
	
	
	==> describe nodes <==
	Name:               embed-certs-834228
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-834228
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=embed-certs-834228
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-834228
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:09:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:09:24 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:09:24 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:09:24 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:09:24 +0000   Wed, 29 Oct 2025 09:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-834228
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d617b188-ae12-430b-83d6-e9ef5bc4858e
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-w9vf6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-834228                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-dgkfz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-834228             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-834228    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-bxthb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-834228             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-834228 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-834228 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-834228 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-834228 event: Registered Node embed-certs-834228 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-834228 status is now: NodeReady
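	[editor's note] For reference, the percentages in the "Allocated resources" table above are the summed requests (and limits) divided by the node's allocatable figures, truncated to whole percent. A quick check of the CPU and memory rows:

	package main

	import "fmt"

	func main() {
		cpuRequests := 850.0     // millicores: 100+100+100+250+200+100 from the pod table
		cpuAllocatable := 8000.0 // 8 CPUs
		memRequests := 220.0     // Mi: 70+100+50 from the pod table
		memAllocatable := 32863352.0 / 1024.0 // 32863352Ki -> ~32093Mi

		// 850/8000 = 10.6%, shown as 10% after truncation.
		fmt.Printf("cpu requests: %.1f%%\n", cpuRequests/cpuAllocatable*100)
		// 220/32093 = 0.7%, shown as 0%.
		fmt.Printf("memory requests: %.1f%%\n", memRequests/memAllocatable*100)
	}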
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [3ee8b9b40964d002db308b48c9eaf7e90ead6d1447a2f5af468878d5944718e9] <==
	{"level":"warn","ts":"2025-10-29T09:09:04.011925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.019706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.029536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.038980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.045559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.052145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.059873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.067873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.076414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.084358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.101189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.109479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.117110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.126253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.142188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.149614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.157736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.165443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.173121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.181845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.207110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.216524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.224360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:04.291839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:09:30.887804Z","caller":"traceutil/trace.go:172","msg":"trace[2047924987] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"141.784608ms","start":"2025-10-29T09:09:30.745976Z","end":"2025-10-29T09:09:30.887760Z","steps":["trace[2047924987] 'process raft request'  (duration: 141.665739ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:09:36 up 52 min,  0 user,  load average: 6.46, 4.08, 2.47
	Linux embed-certs-834228 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39ec3dab384cc0cddc41ad846c352eb65c57e26ec169375af7635647aa93d63d] <==
	I1029 09:09:13.699122       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:09:13.699552       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:09:13.699751       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:09:13.699774       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:09:13.699789       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:09:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:09:13.998942       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:09:13.998976       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:09:13.999001       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:09:14.000031       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:09:14.299249       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:09:14.299276       1 metrics.go:72] Registering metrics
	I1029 09:09:14.299344       1 controller.go:711] "Syncing nftables rules"
	I1029 09:09:24.004260       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:09:24.004327       1 main.go:301] handling current node
	I1029 09:09:33.999586       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:09:33.999656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5311f6bddaf0fd253e1c22998fcd96c4da9c9e5521ed1b19f75866ea4dc25e11] <==
	I1029 09:09:04.939134       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:09:04.947384       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:09:04.948495       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:04.959063       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:09:04.967425       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:04.968923       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:09:05.143287       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:09:05.832791       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:09:05.838237       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:09:05.838259       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:09:06.626827       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:09:06.681046       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:09:06.741803       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:09:06.752128       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1029 09:09:06.753756       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:09:06.759678       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:09:06.922542       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:09:07.929626       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:09:07.940782       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:09:07.950344       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:09:12.033842       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:12.053063       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:12.726295       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:09:12.978296       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1029 09:09:35.132321       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:53748: use of closed network connection
	
	
	==> kube-controller-manager [8906dc244641e1dca82ab35ba736ff70d6d76a28628fe56d59bd9e37c64b156f] <==
	I1029 09:09:11.924746       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:09:11.925167       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:09:11.925387       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:09:11.925417       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:09:11.925541       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:09:11.925590       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:09:11.925617       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:09:11.925659       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:09:11.925866       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:09:11.929441       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:09:11.930903       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:09:11.933281       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:09:11.933487       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:09:11.933673       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-834228"
	I1029 09:09:11.934618       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:09:11.935023       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 09:09:11.937320       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 09:09:11.940640       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:09:11.943572       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:09:11.943610       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:09:11.967664       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:09:11.970911       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:09:11.970935       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:09:11.970944       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:09:26.936572       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6b053551bea0ab955f3c0a0e59fbfc41c6e3c353760a6b6e3e8422fecd3c629f] <==
	I1029 09:09:13.520492       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:09:13.599587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:09:13.700901       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:09:13.700942       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:09:13.701080       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:09:13.731503       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:09:13.731609       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:09:13.740286       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:09:13.740827       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:09:13.740852       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:09:13.742530       1 config.go:200] "Starting service config controller"
	I1029 09:09:13.742548       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:09:13.742598       1 config.go:309] "Starting node config controller"
	I1029 09:09:13.742605       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:09:13.742611       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:09:13.743043       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:09:13.743086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:09:13.743121       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:09:13.743126       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:09:13.843145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:09:13.843145       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:09:13.844478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [21b693d968cad2bc8fcbd2248cdadca7bfc7c011d6425d319fe8ce4af8cc9722] <==
	E1029 09:09:04.977521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:09:04.977918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:09:04.989832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 09:09:04.990846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:09:04.991986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:09:04.992159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:09:04.992723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:09:04.992784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:09:05.833450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:09:05.960803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:09:06.026463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:09:06.061768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:09:06.088721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:09:06.100911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:09:06.127065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:09:06.174689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:09:06.218055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:09:06.230931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 09:09:06.239305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:09:06.258139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:09:06.259695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:09:06.298744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:09:06.351541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:09:06.363022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1029 09:09:08.264064       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:09:08 embed-certs-834228 kubelet[1304]: E1029 09:09:08.800082    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-834228\" already exists" pod="kube-system/etcd-embed-certs-834228"
	Oct 29 09:09:08 embed-certs-834228 kubelet[1304]: I1029 09:09:08.850236    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-834228" podStartSLOduration=1.8502101720000002 podStartE2EDuration="1.850210172s" podCreationTimestamp="2025-10-29 09:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:08.838504208 +0000 UTC m=+1.164790693" watchObservedRunningTime="2025-10-29 09:09:08.850210172 +0000 UTC m=+1.176496657"
	Oct 29 09:09:08 embed-certs-834228 kubelet[1304]: I1029 09:09:08.859797    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-834228" podStartSLOduration=1.8597748950000001 podStartE2EDuration="1.859774895s" podCreationTimestamp="2025-10-29 09:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:08.850518855 +0000 UTC m=+1.176805341" watchObservedRunningTime="2025-10-29 09:09:08.859774895 +0000 UTC m=+1.186061381"
	Oct 29 09:09:08 embed-certs-834228 kubelet[1304]: I1029 09:09:08.871873    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-834228" podStartSLOduration=1.871845447 podStartE2EDuration="1.871845447s" podCreationTimestamp="2025-10-29 09:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:08.860015795 +0000 UTC m=+1.186302258" watchObservedRunningTime="2025-10-29 09:09:08.871845447 +0000 UTC m=+1.198131932"
	Oct 29 09:09:08 embed-certs-834228 kubelet[1304]: I1029 09:09:08.885627    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-834228" podStartSLOduration=1.8856015990000001 podStartE2EDuration="1.885601599s" podCreationTimestamp="2025-10-29 09:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:08.872812567 +0000 UTC m=+1.199099052" watchObservedRunningTime="2025-10-29 09:09:08.885601599 +0000 UTC m=+1.211888086"
	Oct 29 09:09:11 embed-certs-834228 kubelet[1304]: I1029 09:09:11.898817    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 29 09:09:11 embed-certs-834228 kubelet[1304]: I1029 09:09:11.903708    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.092547    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6616e889-1d54-48f4-9239-12fdc19fd542-lib-modules\") pod \"kindnet-dgkfz\" (UID: \"6616e889-1d54-48f4-9239-12fdc19fd542\") " pod="kube-system/kindnet-dgkfz"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.093757    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f-xtables-lock\") pod \"kube-proxy-bxthb\" (UID: \"9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f\") " pod="kube-system/kube-proxy-bxthb"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.093795    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f-lib-modules\") pod \"kube-proxy-bxthb\" (UID: \"9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f\") " pod="kube-system/kube-proxy-bxthb"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.093820    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6616e889-1d54-48f4-9239-12fdc19fd542-cni-cfg\") pod \"kindnet-dgkfz\" (UID: \"6616e889-1d54-48f4-9239-12fdc19fd542\") " pod="kube-system/kindnet-dgkfz"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.093848    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f-kube-proxy\") pod \"kube-proxy-bxthb\" (UID: \"9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f\") " pod="kube-system/kube-proxy-bxthb"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.093876    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kts\" (UniqueName: \"kubernetes.io/projected/9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f-kube-api-access-j6kts\") pod \"kube-proxy-bxthb\" (UID: \"9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f\") " pod="kube-system/kube-proxy-bxthb"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.093908    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6616e889-1d54-48f4-9239-12fdc19fd542-xtables-lock\") pod \"kindnet-dgkfz\" (UID: \"6616e889-1d54-48f4-9239-12fdc19fd542\") " pod="kube-system/kindnet-dgkfz"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.093930    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9js4p\" (UniqueName: \"kubernetes.io/projected/6616e889-1d54-48f4-9239-12fdc19fd542-kube-api-access-9js4p\") pod \"kindnet-dgkfz\" (UID: \"6616e889-1d54-48f4-9239-12fdc19fd542\") " pod="kube-system/kindnet-dgkfz"
	Oct 29 09:09:13 embed-certs-834228 kubelet[1304]: I1029 09:09:13.915068    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dgkfz" podStartSLOduration=1.915038021 podStartE2EDuration="1.915038021s" podCreationTimestamp="2025-10-29 09:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:13.899658785 +0000 UTC m=+6.225945272" watchObservedRunningTime="2025-10-29 09:09:13.915038021 +0000 UTC m=+6.241324505"
	Oct 29 09:09:14 embed-certs-834228 kubelet[1304]: I1029 09:09:14.006416    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bxthb" podStartSLOduration=2.006389438 podStartE2EDuration="2.006389438s" podCreationTimestamp="2025-10-29 09:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:13.915936028 +0000 UTC m=+6.242222513" watchObservedRunningTime="2025-10-29 09:09:14.006389438 +0000 UTC m=+6.332675925"
	Oct 29 09:09:24 embed-certs-834228 kubelet[1304]: I1029 09:09:24.296251    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:09:24 embed-certs-834228 kubelet[1304]: I1029 09:09:24.378260    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxb95\" (UniqueName: \"kubernetes.io/projected/cbc8bcae-4373-412e-a597-5e2af9bbabea-kube-api-access-fxb95\") pod \"storage-provisioner\" (UID: \"cbc8bcae-4373-412e-a597-5e2af9bbabea\") " pod="kube-system/storage-provisioner"
	Oct 29 09:09:24 embed-certs-834228 kubelet[1304]: I1029 09:09:24.378323    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cbc8bcae-4373-412e-a597-5e2af9bbabea-tmp\") pod \"storage-provisioner\" (UID: \"cbc8bcae-4373-412e-a597-5e2af9bbabea\") " pod="kube-system/storage-provisioner"
	Oct 29 09:09:24 embed-certs-834228 kubelet[1304]: I1029 09:09:24.378354    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vgtn\" (UniqueName: \"kubernetes.io/projected/a9ebd931-6ce6-4d23-b24c-ee0e6037096b-kube-api-access-8vgtn\") pod \"coredns-66bc5c9577-w9vf6\" (UID: \"a9ebd931-6ce6-4d23-b24c-ee0e6037096b\") " pod="kube-system/coredns-66bc5c9577-w9vf6"
	Oct 29 09:09:24 embed-certs-834228 kubelet[1304]: I1029 09:09:24.378471    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9ebd931-6ce6-4d23-b24c-ee0e6037096b-config-volume\") pod \"coredns-66bc5c9577-w9vf6\" (UID: \"a9ebd931-6ce6-4d23-b24c-ee0e6037096b\") " pod="kube-system/coredns-66bc5c9577-w9vf6"
	Oct 29 09:09:24 embed-certs-834228 kubelet[1304]: I1029 09:09:24.843081    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w9vf6" podStartSLOduration=11.843048323 podStartE2EDuration="11.843048323s" podCreationTimestamp="2025-10-29 09:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:24.842776371 +0000 UTC m=+17.169062857" watchObservedRunningTime="2025-10-29 09:09:24.843048323 +0000 UTC m=+17.169334809"
	Oct 29 09:09:24 embed-certs-834228 kubelet[1304]: I1029 09:09:24.868703    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.86867783 podStartE2EDuration="11.86867783s" podCreationTimestamp="2025-10-29 09:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:24.868627049 +0000 UTC m=+17.194913534" watchObservedRunningTime="2025-10-29 09:09:24.86867783 +0000 UTC m=+17.194964303"
	Oct 29 09:09:27 embed-certs-834228 kubelet[1304]: I1029 09:09:27.095592    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb6dz\" (UniqueName: \"kubernetes.io/projected/47adfe3b-59ee-4d67-8d34-eb88528af861-kube-api-access-jb6dz\") pod \"busybox\" (UID: \"47adfe3b-59ee-4d67-8d34-eb88528af861\") " pod="default/busybox"
	
	
	==> storage-provisioner [1d4e029501e91ade20348d2f3ba964b18396119729684c1310b2fc754d0581a0] <==
	I1029 09:09:24.696409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:09:24.705739       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:09:24.705785       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:09:24.708453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:24.713491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:09:24.713711       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:09:24.713856       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b974d585-d04f-4d47-a5da-d6dd7320fe4f", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-834228_2dc4c225-9491-4b4d-bc68-af5bf6ad92ad became leader
	I1029 09:09:24.713901       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-834228_2dc4c225-9491-4b4d-bc68-af5bf6ad92ad!
	W1029 09:09:24.716263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:24.720762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:09:24.814253       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-834228_2dc4c225-9491-4b4d-bc68-af5bf6ad92ad!
	W1029 09:09:26.724523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:26.729787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:28.732961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:28.739618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:30.743565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:30.889046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:32.894334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:32.899439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:34.903195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:34.907157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-834228 -n embed-certs-834228
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-834228 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.15s)
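Two patterns in the embed-certs logs above are routine rather than part of the failure. The kube-scheduler's "Failed to watch ... is forbidden" errors are typical transient RBAC noise while the control plane starts up (a cache sync is logged at 09:09:08), and the storage-provisioner's repeated warnings come from its leader election still writing to a deprecated v1 Endpoints object. A minimal sketch for confirming both after startup, assuming the context and object names shown in the logs:

	# Hypothetical spot-checks, not part of the test run:
	# Verify the scheduler's RBAC settled by impersonating it.
	kubectl --context embed-certs-834228 auth can-i list pods --as=system:kube-scheduler
	# Inspect the Endpoints object used for the provisioner's leader election.
	kubectl --context embed-certs-834228 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml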

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (277.730141ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:09:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
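The exit status 11 is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks for paused containers by running `sudo runc list -f json` in the node, and that command exits 1 because runc's default state directory /run/runc does not exist. A sketch of reproducing the probe by hand, assuming the profile is still running:

	# Hypothetical manual reproduction of the failing paused-container check:
	minikube ssh -p no-preload-043790 -- sudo ls -ld /run/runc   # expected: No such file or directory
	minikube ssh -p no-preload-043790 -- sudo runc list -f json  # exits 1 with the same open /run/runc error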
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-043790 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-043790 describe deploy/metrics-server -n kube-system: exit status 1 (59.708706ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-043790 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
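The final assertion looks for the rewritten image reference in the metrics-server deployment; since the enable command failed up front, the deployment was never created and the deployment info is empty. Had it existed, the check amounts to reading a single field, sketched here with the names from the test:

	# Hypothetical verification (the deployment does not exist in this run):
	kubectl --context no-preload-043790 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4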
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-043790
helpers_test.go:243: (dbg) docker inspect no-preload-043790:

-- stdout --
	[
	    {
	        "Id": "b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7",
	        "Created": "2025-10-29T09:08:34.171867381Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288595,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:08:34.222506338Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/hosts",
	        "LogPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7-json.log",
	        "Name": "/no-preload-043790",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-043790:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-043790",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7",
	                "LowerDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-043790",
	                "Source": "/var/lib/docker/volumes/no-preload-043790/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-043790",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-043790",
	                "name.minikube.sigs.k8s.io": "no-preload-043790",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "955d6f9e45157993f3ea5700b6258567fa5844c8ba31c7a6bc379c81e1e883db",
	            "SandboxKey": "/var/run/docker/netns/955d6f9e4515",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-043790": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:75:61:8d:e9:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dcc575c7384eba10361bed7adc7ddf8a9bfff63d366b63895fcc568dd1c4ba1d",
	                    "EndpointID": "c2e3bb782a916a9c07da3e14130022ac794ffc2c823101a9aa1097194f23c738",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-043790",
	                        "b2e7560bb45a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
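The inspect dump shows the container itself is healthy: running, not paused, with the API server's 8443/tcp published on 127.0.0.1:33096, so the failure is inside the node rather than at the Docker layer. When only those fields matter, Go-template formatting avoids wading through the full JSON; a sketch against the same container:

	# Hypothetical one-liners for the fields the post-mortem consults:
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-043790
	docker port no-preload-043790 8443   # prints the published host port (127.0.0.1:33096 above)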
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-043790 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-043790 logs -n 25: (1.090306793s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-240549 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo docker system info                                                                                                                                 │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cri-dockerd --version                                                                                                                              │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo containerd config dump                                                                                                                             │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo crio config                                                                                                                                        │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p bridge-240549                                                                                                                                                         │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-318335                                                                                                                                          │ disable-driver-mounts-318335 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                             │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:09:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:09:26.586398  302556 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:09:26.586683  302556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:26.586702  302556 out.go:374] Setting ErrFile to fd 2...
	I1029 09:09:26.586705  302556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:26.587046  302556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:09:26.587819  302556 out.go:368] Setting JSON to false
	I1029 09:09:26.589246  302556 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3115,"bootTime":1761725852,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:09:26.589311  302556 start.go:143] virtualization: kvm guest
	I1029 09:09:26.591194  302556 out.go:179] * [default-k8s-diff-port-017274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:09:26.592802  302556 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:09:26.593053  302556 notify.go:221] Checking for updates...
	I1029 09:09:26.595353  302556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:09:26.596548  302556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:09:26.597654  302556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:09:26.598716  302556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:09:26.602237  302556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:09:26.603973  302556 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:26.604131  302556 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:26.604202  302556 config.go:182] Loaded profile config "old-k8s-version-096492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:09:26.604301  302556 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:09:26.630525  302556 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:09:26.630664  302556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:26.690543  302556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:09:26.679954066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:26.690659  302556 docker.go:319] overlay module found
	I1029 09:09:26.692453  302556 out.go:179] * Using the docker driver based on user configuration
	I1029 09:09:26.693655  302556 start.go:309] selected driver: docker
	I1029 09:09:26.693673  302556 start.go:930] validating driver "docker" against <nil>
	I1029 09:09:26.693686  302556 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:09:26.694285  302556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:26.755644  302556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:09:26.744845414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:26.755830  302556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:09:26.756121  302556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:09:26.757833  302556 out.go:179] * Using Docker driver with root privileges
	I1029 09:09:26.758943  302556 cni.go:84] Creating CNI manager for ""
	I1029 09:09:26.759055  302556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:09:26.759068  302556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:09:26.759147  302556 start.go:353] cluster config:
	{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:09:26.760424  302556 out.go:179] * Starting "default-k8s-diff-port-017274" primary control-plane node in "default-k8s-diff-port-017274" cluster
	I1029 09:09:26.761536  302556 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:09:26.762616  302556 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:09:26.763817  302556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:09:26.763879  302556 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:09:26.763907  302556 cache.go:59] Caching tarball of preloaded images
	I1029 09:09:26.763942  302556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:09:26.764036  302556 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:09:26.764054  302556 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:09:26.764201  302556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:09:26.764232  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json: {Name:mke336b64d933f60f421058bc59f599f614cb71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:26.786506  302556 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:09:26.786533  302556 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:09:26.786551  302556 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:09:26.786581  302556 start.go:360] acquireMachinesLock for default-k8s-diff-port-017274: {Name:mkec68307c2ffe0cd4f9e8fcf3c8e2dc4c6d4bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:26.786704  302556 start.go:364] duration metric: took 102.967µs to acquireMachinesLock for "default-k8s-diff-port-017274"
	I1029 09:09:26.786737  302556 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:09:26.786849  302556 start.go:125] createHost starting for "" (driver="docker")
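
The acquireMachinesLock lines above serialize host creation behind a named lock with Delay:500ms and Timeout:10m0s. Below is a minimal Go sketch of that poll-until-timeout shape, using an O_EXCL lock file; the path and helper name are illustrative, not minikube's actual lock package.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, mirroring the
// Delay:500ms / Timeout:10m0s parameters seen in the log above.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock held; caller removes path to release
		}
		if !errors.Is(err, os.ErrExist) {
			return err // unexpected filesystem error
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Lock path is hypothetical, chosen only for the sketch.
	const lock = "/tmp/minikube-machines.lock"
	if err := acquireLock(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println("lock:", err)
		return
	}
	defer os.Remove(lock)
	fmt.Println("lock acquired")
}
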
	I1029 09:09:26.188355  292184 pod_ready.go:94] pod "kube-proxy-bxthb" is "Ready"
	I1029 09:09:26.188389  292184 pod_ready.go:86] duration metric: took 399.779532ms for pod "kube-proxy-bxthb" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.388706  292184 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.788345  292184 pod_ready.go:94] pod "kube-scheduler-embed-certs-834228" is "Ready"
	I1029 09:09:26.788388  292184 pod_ready.go:86] duration metric: took 399.616885ms for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.788403  292184 pod_ready.go:40] duration metric: took 1.605163852s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:26.839284  292184 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:09:26.842361  292184 out.go:179] * Done! kubectl is now configured to use "embed-certs-834228" cluster and "default" namespace by default
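
The pod_ready waits above poll each control-plane pod until it reports Ready or disappears, capped at 4m0s. A hedged client-go sketch of the same check follows; the kubeconfig path comes from client-go's default, the pod name is taken from this run, and the 400ms poll interval is an illustrative choice.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReadyOrGone polls until the pod is Ready or no longer exists,
// matching the "Ready or be gone" semantics in the log above.
func waitPodReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 400*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone: stop waiting
			}
			if err != nil {
				return false, nil // transient error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReadyOrGone(cs, "kube-system", "kube-scheduler-embed-certs-834228", 4*time.Minute)
	fmt.Println("wait result:", err)
}
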
	W1029 09:09:24.627967  287408 node_ready.go:57] node "no-preload-043790" has "Ready":"False" status (will retry)
	I1029 09:09:25.628113  287408 node_ready.go:49] node "no-preload-043790" is "Ready"
	I1029 09:09:25.628142  287408 node_ready.go:38] duration metric: took 13.503742028s for node "no-preload-043790" to be "Ready" ...
	I1029 09:09:25.628161  287408 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:09:25.628222  287408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:09:25.661909  287408 api_server.go:72] duration metric: took 13.910561121s to wait for apiserver process to appear ...
	I1029 09:09:25.661940  287408 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:09:25.661963  287408 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1029 09:09:25.669411  287408 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1029 09:09:25.670743  287408 api_server.go:141] control plane version: v1.34.1
	I1029 09:09:25.670789  287408 api_server.go:131] duration metric: took 8.839676ms to wait for apiserver health ...
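
The healthz wait above issues HTTPS GETs against https://192.168.94.2:8443/healthz until it gets a 200 with body "ok". A minimal Go probe in the same shape; skipping TLS verification keeps the sketch short, whereas minikube itself trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// InsecureSkipVerify only to keep the sketch self-contained;
		// real code should verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
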
	I1029 09:09:25.670800  287408 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:09:25.676908  287408 system_pods.go:59] 8 kube-system pods found
	I1029 09:09:25.676956  287408 system_pods.go:61] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.676966  287408 system_pods.go:61] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.676974  287408 system_pods.go:61] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.676986  287408 system_pods.go:61] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.677028  287408 system_pods.go:61] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.677033  287408 system_pods.go:61] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.677038  287408 system_pods.go:61] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.677045  287408 system_pods.go:61] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.677058  287408 system_pods.go:74] duration metric: took 6.25048ms to wait for pod list to return data ...
	I1029 09:09:25.677068  287408 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:09:25.680283  287408 default_sa.go:45] found service account: "default"
	I1029 09:09:25.680308  287408 default_sa.go:55] duration metric: took 3.233907ms for default service account to be created ...
	I1029 09:09:25.680319  287408 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:09:25.683528  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:25.683577  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.683587  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.683594  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.683601  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.683608  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.683613  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.683618  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.683626  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.683658  287408 retry.go:31] will retry after 280.052496ms: missing components: kube-dns
	I1029 09:09:25.967483  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:25.967520  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:09:25.967527  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:25.967532  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:25.967536  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:25.967541  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:25.967544  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:25.967547  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:25.967552  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:09:25.967567  287408 retry.go:31] will retry after 253.86945ms: missing components: kube-dns
	I1029 09:09:26.225761  287408 system_pods.go:86] 8 kube-system pods found
	I1029 09:09:26.225795  287408 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Running
	I1029 09:09:26.225803  287408 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running
	I1029 09:09:26.225809  287408 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running
	I1029 09:09:26.225813  287408 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running
	I1029 09:09:26.225819  287408 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running
	I1029 09:09:26.225822  287408 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running
	I1029 09:09:26.225826  287408 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running
	I1029 09:09:26.225829  287408 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Running
	I1029 09:09:26.225838  287408 system_pods.go:126] duration metric: took 545.513139ms to wait for k8s-apps to be running ...
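
The k8s-apps wait above re-lists kube-system pods after short randomized delays (280ms, then 253ms in this run) until no component is missing. A toy Go version of that retry shape; checkComponents is invented for the sketch and stands in for the pod listing.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

var tries int

// checkComponents pretends kube-dns needs two list cycles to come up.
func checkComponents() []string {
	tries++
	if tries < 3 {
		return []string{"kube-dns"}
	}
	return nil
}

func main() {
	for {
		missing := checkComponents()
		if len(missing) == 0 {
			fmt.Println("all components running")
			return
		}
		// Randomized delay in the same ballpark as the log's retries.
		d := time.Duration(200+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
		time.Sleep(d)
	}
}
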
	I1029 09:09:26.225847  287408 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:09:26.225910  287408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:09:26.241077  287408 system_svc.go:56] duration metric: took 15.218505ms WaitForService to wait for kubelet
	I1029 09:09:26.241119  287408 kubeadm.go:587] duration metric: took 14.48978974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:09:26.241143  287408 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:09:26.244412  287408 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:09:26.244448  287408 node_conditions.go:123] node cpu capacity is 8
	I1029 09:09:26.244465  287408 node_conditions.go:105] duration metric: took 3.314653ms to run NodePressure ...
	I1029 09:09:26.244479  287408 start.go:242] waiting for startup goroutines ...
	I1029 09:09:26.244488  287408 start.go:247] waiting for cluster config update ...
	I1029 09:09:26.244504  287408 start.go:256] writing updated cluster config ...
	I1029 09:09:26.244877  287408 ssh_runner.go:195] Run: rm -f paused
	I1029 09:09:26.249655  287408 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:26.254294  287408 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.259361  287408 pod_ready.go:94] pod "coredns-66bc5c9577-bgslp" is "Ready"
	I1029 09:09:26.259388  287408 pod_ready.go:86] duration metric: took 5.060691ms for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.261898  287408 pod_ready.go:83] waiting for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.266636  287408 pod_ready.go:94] pod "etcd-no-preload-043790" is "Ready"
	I1029 09:09:26.266663  287408 pod_ready.go:86] duration metric: took 4.740634ms for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.269175  287408 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.273624  287408 pod_ready.go:94] pod "kube-apiserver-no-preload-043790" is "Ready"
	I1029 09:09:26.273649  287408 pod_ready.go:86] duration metric: took 4.450389ms for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.275707  287408 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.655203  287408 pod_ready.go:94] pod "kube-controller-manager-no-preload-043790" is "Ready"
	I1029 09:09:26.655236  287408 pod_ready.go:86] duration metric: took 379.505293ms for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:26.855462  287408 pod_ready.go:83] waiting for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.254802  287408 pod_ready.go:94] pod "kube-proxy-7dc8p" is "Ready"
	I1029 09:09:27.254827  287408 pod_ready.go:86] duration metric: took 399.334643ms for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.454125  287408 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.853871  287408 pod_ready.go:94] pod "kube-scheduler-no-preload-043790" is "Ready"
	I1029 09:09:27.853895  287408 pod_ready.go:86] duration metric: took 399.7441ms for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:09:27.853907  287408 pod_ready.go:40] duration metric: took 1.604212036s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:09:27.904823  287408 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:09:27.909160  287408 out.go:179] * Done! kubectl is now configured to use "no-preload-043790" cluster and "default" namespace by default
	I1029 09:09:26.789011  302556 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:09:26.789307  302556 start.go:159] libmachine.API.Create for "default-k8s-diff-port-017274" (driver="docker")
	I1029 09:09:26.789366  302556 client.go:173] LocalClient.Create starting
	I1029 09:09:26.789453  302556 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 09:09:26.789496  302556 main.go:143] libmachine: Decoding PEM data...
	I1029 09:09:26.789523  302556 main.go:143] libmachine: Parsing certificate...
	I1029 09:09:26.789596  302556 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 09:09:26.789620  302556 main.go:143] libmachine: Decoding PEM data...
	I1029 09:09:26.789631  302556 main.go:143] libmachine: Parsing certificate...
	I1029 09:09:26.789962  302556 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-017274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:09:26.810730  302556 cli_runner.go:211] docker network inspect default-k8s-diff-port-017274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:09:26.810829  302556 network_create.go:284] running [docker network inspect default-k8s-diff-port-017274] to gather additional debugging logs...
	I1029 09:09:26.810853  302556 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-017274
	W1029 09:09:26.830745  302556 cli_runner.go:211] docker network inspect default-k8s-diff-port-017274 returned with exit code 1
	I1029 09:09:26.830786  302556 network_create.go:287] error running [docker network inspect default-k8s-diff-port-017274]: docker network inspect default-k8s-diff-port-017274: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-017274 not found
	I1029 09:09:26.830805  302556 network_create.go:289] output of [docker network inspect default-k8s-diff-port-017274]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-017274 not found
	
	** /stderr **
	I1029 09:09:26.830891  302556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:09:26.851280  302556 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b27c046ec42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:bc:cb:4a:50:f2} reservation:<nil>}
	I1029 09:09:26.852100  302556 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0c15025939eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:79:05:d8:32:73} reservation:<nil>}
	I1029 09:09:26.852947  302556 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5e92a9c19423 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:ec:bb:72:ab:23} reservation:<nil>}
	I1029 09:09:26.853866  302556 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86d19029abe0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:37:1e:54:39:51} reservation:<nil>}
	I1029 09:09:26.854736  302556 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e1d4705eea87 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ce:b7:20:c7:ca:ab} reservation:<nil>}
	I1029 09:09:26.855593  302556 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-dcc575c7384e IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:c6:b4:23:94:8e:82} reservation:<nil>}
	I1029 09:09:26.856598  302556 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1aff0}
	I1029 09:09:26.856627  302556 network_create.go:124] attempt to create docker network default-k8s-diff-port-017274 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1029 09:09:26.856705  302556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-017274 default-k8s-diff-port-017274
	I1029 09:09:26.925107  302556 network_create.go:108] docker network default-k8s-diff-port-017274 192.168.103.0/24 created
	I1029 09:09:26.925158  302556 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-017274" container
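
The network.go lines above scan for a free /24, starting at 192.168.49.0/24 and, in this run, stepping the third octet by 9 until 192.168.103.0/24 comes up free; the node then gets gateway+1 (192.168.103.2). A toy reproduction of the scan; in reality the taken set comes from docker network inspect rather than a hard-coded map.

package main

import "fmt"

func main() {
	// Subnets already claimed by other minikube networks, as in the log.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	for octet := 49; octet <= 247; octet += 9 { // step of 9 matches this run
		if taken[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24, gateway 192.168.%d.1, node IP 192.168.%d.2\n",
			octet, octet, octet)
		break
	}
}
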
	I1029 09:09:26.925221  302556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:09:26.944359  302556 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-017274 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-017274 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:09:26.966333  302556 oci.go:103] Successfully created a docker volume default-k8s-diff-port-017274
	I1029 09:09:26.966433  302556 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-017274-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-017274 --entrypoint /usr/bin/test -v default-k8s-diff-port-017274:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:09:27.387755  302556 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-017274
	I1029 09:09:27.387808  302556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:09:27.387839  302556 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:09:27.387949  302556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-017274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1029 09:09:32.037941  302556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-017274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.649937892s)
	I1029 09:09:32.037979  302556 kic.go:203] duration metric: took 4.650135018s to extract preloaded images to volume ...
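
The extraction above runs tar inside a disposable kicbase container so the preloaded images land directly in the machine's /var volume. Below is the same invocation driven from Go with os/exec; the tarball path, volume name, and image tag are copied from the log (image digest elided for brevity).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	// Throwaway container whose only job is to untar into the named volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "default-k8s-diff-port-017274:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted")
}
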
	W1029 09:09:32.038136  302556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1029 09:09:32.038191  302556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1029 09:09:32.038243  302556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:09:32.104836  302556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-017274 --name default-k8s-diff-port-017274 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-017274 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-017274 --network default-k8s-diff-port-017274 --ip 192.168.103.2 --volume default-k8s-diff-port-017274:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:09:32.419915  302556 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Running}}
	I1029 09:09:32.443819  302556 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:09:32.465908  302556 cli_runner.go:164] Run: docker exec default-k8s-diff-port-017274 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:09:32.519017  302556 oci.go:144] the created container "default-k8s-diff-port-017274" has a running status.
	I1029 09:09:32.519062  302556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa...
	I1029 09:09:33.289375  302556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:09:33.326168  302556 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:09:33.348415  302556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:09:33.348449  302556 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-017274 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:09:33.394019  302556 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
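
kic.go above creates a fresh RSA key for the machine and installs the public half as /home/docker/.ssh/authorized_keys (381 bytes here). A sketch of the key-generation half using golang.org/x/crypto/ssh; output filenames mirror the log, error handling kept minimal.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key, PEM-encoded, mode 0600 like a normal id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Public key in authorized_keys format, as pushed into the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
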
	I1029 09:09:33.412382  302556 machine.go:94] provisionDockerMachine start ...
	I1029 09:09:33.412484  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:33.430827  302556 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:33.431188  302556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1029 09:09:33.431205  302556 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:09:33.572484  302556 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-017274
	
	I1029 09:09:33.572516  302556 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-017274"
	I1029 09:09:33.572590  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:33.590673  302556 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:33.590917  302556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1029 09:09:33.590937  302556 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-017274 && echo "default-k8s-diff-port-017274" | sudo tee /etc/hostname
	I1029 09:09:33.742174  302556 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-017274
	
	I1029 09:09:33.742262  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:33.760783  302556 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:33.761006  302556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1029 09:09:33.761033  302556 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-017274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-017274/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-017274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:09:33.901913  302556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:09:33.901949  302556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:09:33.902000  302556 ubuntu.go:190] setting up certificates
	I1029 09:09:33.902014  302556 provision.go:84] configureAuth start
	I1029 09:09:33.902078  302556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:09:33.921075  302556 provision.go:143] copyHostCerts
	I1029 09:09:33.921140  302556 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:09:33.921155  302556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:09:33.921251  302556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:09:33.921381  302556 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:09:33.921393  302556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:09:33.921434  302556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:09:33.921543  302556 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:09:33.921555  302556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:09:33.921591  302556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:09:33.921899  302556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-017274 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-017274 localhost minikube]
	I1029 09:09:34.168061  302556 provision.go:177] copyRemoteCerts
	I1029 09:09:34.168138  302556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:09:34.168183  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:34.186374  302556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:09:34.288582  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:09:34.310883  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:09:34.330164  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1029 09:09:34.349334  302556 provision.go:87] duration metric: took 447.301474ms to configureAuth
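
configureAuth above generates a server certificate whose SANs are [127.0.0.1 192.168.103.2 default-k8s-diff-port-017274 localhost minikube]. A compact crypto/x509 illustration of assembling that SAN list; it self-signs purely for brevity, whereas minikube signs with the ca.pem/ca-key.pem shown above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048) // error handling trimmed
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-017274"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// SANs matching the san=[...] list in the provision.go line above.
		DNSNames:    []string{"default-k8s-diff-port-017274", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
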
	I1029 09:09:34.349367  302556 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:09:34.349590  302556 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:34.349734  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:34.369800  302556 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:34.370091  302556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1029 09:09:34.370118  302556 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:09:34.642835  302556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:09:34.642860  302556 machine.go:97] duration metric: took 1.2304579s to provisionDockerMachine
	I1029 09:09:34.642872  302556 client.go:176] duration metric: took 7.85349608s to LocalClient.Create
	I1029 09:09:34.642888  302556 start.go:167] duration metric: took 7.853583828s to libmachine.API.Create "default-k8s-diff-port-017274"
	I1029 09:09:34.642897  302556 start.go:293] postStartSetup for "default-k8s-diff-port-017274" (driver="docker")
	I1029 09:09:34.642909  302556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:09:34.642970  302556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:09:34.643048  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:34.663776  302556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:09:34.766052  302556 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:09:34.769598  302556 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:09:34.769629  302556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:09:34.769641  302556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:09:34.769719  302556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:09:34.769818  302556 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:09:34.769930  302556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:09:34.777608  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:09:34.797539  302556 start.go:296] duration metric: took 154.629649ms for postStartSetup
	I1029 09:09:34.797894  302556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:09:34.816291  302556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:09:34.816600  302556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:09:34.816644  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:34.835086  302556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:09:34.932215  302556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:09:34.936907  302556 start.go:128] duration metric: took 8.150043824s to createHost
	I1029 09:09:34.936937  302556 start.go:83] releasing machines lock for "default-k8s-diff-port-017274", held for 8.150217199s
	I1029 09:09:34.937034  302556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:09:34.955486  302556 ssh_runner.go:195] Run: cat /version.json
	I1029 09:09:34.955534  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:34.955585  302556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:09:34.955663  302556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:09:34.974867  302556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:09:34.975097  302556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:09:35.071307  302556 ssh_runner.go:195] Run: systemctl --version
	I1029 09:09:35.135510  302556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:09:35.175856  302556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:09:35.180597  302556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:09:35.180690  302556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:09:35.208562  302556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1029 09:09:35.208588  302556 start.go:496] detecting cgroup driver to use...
	I1029 09:09:35.208617  302556 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:09:35.208655  302556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:09:35.225633  302556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:09:35.239063  302556 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:09:35.239127  302556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:09:35.256353  302556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:09:35.275035  302556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:09:35.366953  302556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:09:35.462683  302556 docker.go:234] disabling docker service ...
	I1029 09:09:35.462755  302556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:09:35.482220  302556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:09:35.496656  302556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:09:35.591582  302556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:09:35.679271  302556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:09:35.692275  302556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:09:35.706851  302556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:09:35.706921  302556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:35.718116  302556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:09:35.718178  302556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:35.726941  302556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:35.736659  302556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:35.746639  302556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:09:35.757134  302556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:35.767359  302556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:35.784776  302556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:35.795520  302556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:09:35.804067  302556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:09:35.812315  302556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:09:35.900243  302556 ssh_runner.go:195] Run: sudo systemctl restart crio
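
The sed runs above patch /etc/crio/crio.conf.d/02-crio.conf in place before the crio restart. Pieced together from those sed expressions, the drop-in should end up roughly as below; the TOML section headers are an assumption, since the log shows only the key rewrites, not the surrounding file.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
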
	I1029 09:09:36.021170  302556 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:09:36.021239  302556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:09:36.025566  302556 start.go:564] Will wait 60s for crictl version
	I1029 09:09:36.025627  302556 ssh_runner.go:195] Run: which crictl
	I1029 09:09:36.029306  302556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:09:36.055428  302556 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:09:36.055508  302556 ssh_runner.go:195] Run: crio --version
	I1029 09:09:36.084261  302556 ssh_runner.go:195] Run: crio --version
	I1029 09:09:36.117064  302556 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:09:36.118309  302556 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-017274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:09:36.137906  302556 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1029 09:09:36.142840  302556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:09:36.153704  302556 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:09:36.153834  302556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:09:36.153899  302556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:09:36.188615  302556 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:09:36.188644  302556 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:09:36.188713  302556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:09:36.217638  302556 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:09:36.217676  302556 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:09:36.217687  302556 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1029 09:09:36.217856  302556 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-017274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:09:36.217957  302556 ssh_runner.go:195] Run: crio config
	I1029 09:09:36.266147  302556 cni.go:84] Creating CNI manager for ""
	I1029 09:09:36.266172  302556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:09:36.266190  302556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:09:36.266211  302556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-017274 NodeName:default-k8s-diff-port-017274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:09:36.266334  302556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-017274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:09:36.266391  302556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:09:36.276112  302556 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:09:36.276188  302556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:09:36.284430  302556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1029 09:09:36.297557  302556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:09:36.314206  302556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
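
The 2227-byte kubeadm.yaml.new scp'd here is the rendered config dumped above; kubeadm presumably consumes it in one shot via something like kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new (any extra flags minikube passes are not shown in this log).
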
	I1029 09:09:36.327725  302556 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:09:36.331677  302556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:09:36.343266  302556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:09:36.428376  302556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:09:36.460956  302556 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274 for IP: 192.168.103.2
	I1029 09:09:36.460981  302556 certs.go:195] generating shared ca certs ...
	I1029 09:09:36.461025  302556 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:36.461300  302556 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:09:36.461369  302556 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:09:36.461384  302556 certs.go:257] generating profile certs ...
	I1029 09:09:36.461483  302556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.key
	I1029 09:09:36.461503  302556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.crt with IP's: []
	I1029 09:09:36.553159  302556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.crt ...
	I1029 09:09:36.553196  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.crt: {Name:mk581960772eec8e4f655145e5be1beb15db1f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:36.553446  302556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.key ...
	I1029 09:09:36.553471  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.key: {Name:mk1bcaf67a4835298833f2e039b2ad6ae3181036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:36.553606  302556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key.81f03550
	I1029 09:09:36.553632  302556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt.81f03550 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1029 09:09:36.785977  302556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt.81f03550 ...
	I1029 09:09:36.786019  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt.81f03550: {Name:mkeb15deb2357e6148fb10783ffc34ab9e34d52e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:36.786219  302556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key.81f03550 ...
	I1029 09:09:36.786236  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key.81f03550: {Name:mkb9fff0bc7ee2c1d5b6fcb7754d0f29b6c7f01e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:36.786346  302556 certs.go:382] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt.81f03550 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt
	I1029 09:09:36.786474  302556 certs.go:386] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key.81f03550 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key
	I1029 09:09:36.786535  302556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key
	I1029 09:09:36.786550  302556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.crt with IP's: []
	I1029 09:09:37.091396  302556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.crt ...
	I1029 09:09:37.091426  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.crt: {Name:mk8f94112b97af323a7b9eb222879b9d7c12c027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:37.091612  302556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key ...
	I1029 09:09:37.091629  302556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key: {Name:mkd6b32c92034df162dbe8e2b8c09068b00e5e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
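At this point all three profile certs (client, apiserver, proxy-client) have been generated; the apiserver cert was signed for the SANs listed above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). A quick way to confirm that from the CI host, a sketch using the profile path from the log above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt \
      | grep -A1 "Subject Alternative Name"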
	I1029 09:09:37.091846  302556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:09:37.091896  302556 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:09:37.091911  302556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:09:37.091953  302556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:09:37.092006  302556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:09:37.092043  302556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:09:37.092100  302556 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:09:37.092727  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:09:37.113947  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:09:37.138299  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:09:37.161376  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:09:37.181694  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 09:09:37.203278  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:09:37.226916  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:09:37.250875  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:09:37.273563  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:09:37.295807  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:09:37.316597  302556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:09:37.337703  302556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
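The scp lines above push every cert into /var/lib/minikube/certs inside the node. If that step is ever in doubt, the copied files can be listed directly; a sketch, assuming the profile name from this run:

    minikube -p default-k8s-diff-port-017274 ssh -- sudo ls -l /var/lib/minikube/certs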
	I1029 09:09:37.352508  302556 ssh_runner.go:195] Run: openssl version
	I1029 09:09:37.359007  302556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:09:37.367840  302556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:09:37.372309  302556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:09:37.372360  302556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:09:37.410263  302556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:09:37.419517  302556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:09:37.428699  302556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:09:37.432910  302556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:09:37.432969  302556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:09:37.476468  302556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:09:37.485614  302556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:09:37.495396  302556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:09:37.499703  302556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:09:37.499762  302556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:09:37.537230  302556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
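The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found by the name <subject_hash>.0, so each cert's `openssl x509 -hash` output (e.g. b5213941 for minikubeCA.pem) becomes the symlink name. The same pattern, condensed into a minimal sketch:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"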
	I1029 09:09:37.547073  302556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:09:37.551390  302556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:09:37.551448  302556 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:09:37.551541  302556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:09:37.551601  302556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:09:37.595919  302556 cri.go:89] found id: ""
	I1029 09:09:37.595985  302556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:09:37.605976  302556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:09:37.616568  302556 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:09:37.616638  302556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:09:37.625016  302556 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:09:37.625036  302556 kubeadm.go:158] found existing configuration files:
	
	I1029 09:09:37.625082  302556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1029 09:09:37.634715  302556 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:09:37.634765  302556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:09:37.643177  302556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1029 09:09:37.651669  302556 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:09:37.651732  302556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:09:37.662154  302556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1029 09:09:37.671301  302556 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:09:37.671347  302556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:09:37.680343  302556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1029 09:09:37.688870  302556 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:09:37.688921  302556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
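Each kubeconfig file above is handled with the same grep-then-remove idiom: keep the file only if it already points at https://control-plane.minikube.internal:8444, otherwise delete it so kubeadm can regenerate it. The four `Run:` lines condense to this sketch:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done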
	I1029 09:09:37.697688  302556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:09:37.739602  302556 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 09:09:37.739678  302556 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:09:37.761131  302556 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:09:37.761215  302556 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1029 09:09:37.761279  302556 kubeadm.go:319] OS: Linux
	I1029 09:09:37.761337  302556 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:09:37.761405  302556 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:09:37.761495  302556 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:09:37.761576  302556 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:09:37.761655  302556 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:09:37.761732  302556 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:09:37.761801  302556 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:09:37.761873  302556 kubeadm.go:319] CGROUPS_IO: enabled
	I1029 09:09:37.823833  302556 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:09:37.824034  302556 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:09:37.824167  302556 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 09:09:37.832773  302556 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
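The preflight block above checks the kernel version and the cgroup controllers (CGROUPS_CPU through CGROUPS_IO). On a cgroup v2 host, which this kernel almost certainly is, the same information is readable directly; a sketch of the manual equivalent:

    cat /sys/fs/cgroup/cgroup.controllers   # enabled controllers, e.g. cpu cpuset io memory pids
    uname -r                                # the KERNEL_VERSION reported above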
	
	
	==> CRI-O <==
	Oct 29 09:09:25 no-preload-043790 crio[767]: time="2025-10-29T09:09:25.556815829Z" level=info msg="Starting container: 2267391820a4ae96b1c52c4e8391d80b72c844041cc7feedaf96b3d001918002" id=7bdc0cc0-150a-4990-b7d6-16f22ba33756 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:09:25 no-preload-043790 crio[767]: time="2025-10-29T09:09:25.559036212Z" level=info msg="Started container" PID=2877 containerID=2267391820a4ae96b1c52c4e8391d80b72c844041cc7feedaf96b3d001918002 description=kube-system/coredns-66bc5c9577-bgslp/coredns id=7bdc0cc0-150a-4990-b7d6-16f22ba33756 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bcfc4e984ff49fffda78945c017a6916ea836d3b8c5a1cc42c651bfd72864cba
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.403315641Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7854c3de-873f-4958-98a3-48635a191e75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.4034072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.410392625Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:722e161d31a5ca45dd84989801890e72d6174482bf46fb9c05db0da0df62dde7 UID:52579065-c1ba-441f-8953-b5336db20cc0 NetNS:/var/run/netns/9afc45a3-b1b2-434f-9491-f47ab9c78da8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001333c0}] Aliases:map[]}"
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.410436906Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.421785212Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:722e161d31a5ca45dd84989801890e72d6174482bf46fb9c05db0da0df62dde7 UID:52579065-c1ba-441f-8953-b5336db20cc0 NetNS:/var/run/netns/9afc45a3-b1b2-434f-9491-f47ab9c78da8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001333c0}] Aliases:map[]}"
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.42203923Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.423277368Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.424525578Z" level=info msg="Ran pod sandbox 722e161d31a5ca45dd84989801890e72d6174482bf46fb9c05db0da0df62dde7 with infra container: default/busybox/POD" id=7854c3de-873f-4958-98a3-48635a191e75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.425903278Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a9715122-0f16-4b7a-a9e1-611ddc065c85 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.426052017Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a9715122-0f16-4b7a-a9e1-611ddc065c85 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.426088487Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a9715122-0f16-4b7a-a9e1-611ddc065c85 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.426701979Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1dea656a-ba7b-4d0c-8175-eaf5a76878fb name=/runtime.v1.ImageService/PullImage
	Oct 29 09:09:28 no-preload-043790 crio[767]: time="2025-10-29T09:09:28.428214168Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.157351281Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1dea656a-ba7b-4d0c-8175-eaf5a76878fb name=/runtime.v1.ImageService/PullImage
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.157936344Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3816e564-2979-4fb6-b1a3-491b1d1b5f21 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.159377308Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0807d2fd-3336-498c-bdd9-8777ac52ba75 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.162881754Z" level=info msg="Creating container: default/busybox/busybox" id=610c6ff1-9f68-4995-810d-bf8c1f7e1e98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.163048472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.167407838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.167819791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.196950661Z" level=info msg="Created container 1bb600dab8ce781333f809c17be43bfa536400b32de62ca7df3fb414f41238e4: default/busybox/busybox" id=610c6ff1-9f68-4995-810d-bf8c1f7e1e98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.197729658Z" level=info msg="Starting container: 1bb600dab8ce781333f809c17be43bfa536400b32de62ca7df3fb414f41238e4" id=d5383221-f585-4528-becc-3c09154acc5d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:09:29 no-preload-043790 crio[767]: time="2025-10-29T09:09:29.199683505Z" level=info msg="Started container" PID=2952 containerID=1bb600dab8ce781333f809c17be43bfa536400b32de62ca7df3fb414f41238e4 description=default/busybox/busybox id=d5383221-f585-4528-becc-3c09154acc5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=722e161d31a5ca45dd84989801890e72d6174482bf46fb9c05db0da0df62dde7
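The CRI-O sequence above (ImageStatus miss, PullImage by tag, ImageStatus hit, CreateContainer, StartContainer) can be replayed by hand with crictl on the node; a sketch:

    sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc   # misses before the pull
    sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
    sudo crictl images | grep busybox                               # now resolved to the sha256 digest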
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1bb600dab8ce7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   722e161d31a5c       busybox                                     default
	2267391820a4a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   bcfc4e984ff49       coredns-66bc5c9577-bgslp                    kube-system
	8394dc05d9de2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   0819a267264d6       storage-provisioner                         kube-system
	31ecfb04e340c       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   90a3e7630b488       kindnet-dlrgv                               kube-system
	89377c726bf6c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   b1f9ed058c76b       kube-proxy-7dc8p                            kube-system
	35f40abfb1f04       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      37 seconds ago      Running             kube-apiserver            0                   04716d2efee3c       kube-apiserver-no-preload-043790            kube-system
	97064513bce2c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      37 seconds ago      Running             kube-scheduler            0                   9c9d3caf46507       kube-scheduler-no-preload-043790            kube-system
	2a90a207e87d4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      37 seconds ago      Running             kube-controller-manager   0                   75081a5a7f810       kube-controller-manager-no-preload-043790   kube-system
	b7422a7ceb821       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      37 seconds ago      Running             etcd                      0                   6ffcf18149c58       etcd-no-preload-043790                      kube-system
	
	
	==> coredns [2267391820a4ae96b1c52c4e8391d80b72c844041cc7feedaf96b3d001918002] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39040 - 8384 "HINFO IN 8217473496543809311.6231478674808381026. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057805306s
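The single HINFO query above is CoreDNS's own loop-detection probe (the NXDOMAIN answer means no forwarding loop). Ordinary resolution through the kube-dns ClusterIP (10.96.0.10, allocated in the kube-apiserver log below) can be spot-checked from a throwaway pod; a sketch:

    kubectl run -it --rm dnscheck --image=busybox:1.28 --restart=Never \
      -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10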
	
	
	==> describe nodes <==
	Name:               no-preload-043790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-043790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=no-preload-043790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-043790
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:09:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:09:37 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:09:37 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:09:37 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:09:37 +0000   Wed, 29 Oct 2025 09:09:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-043790
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                75fc084d-43fe-4f22-be75-228a0a9d261e
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-bgslp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-043790                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-dlrgv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-043790             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-043790    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-7dc8p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-043790             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node no-preload-043790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node no-preload-043790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node no-preload-043790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-043790 event: Registered Node no-preload-043790 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-043790 status is now: NodeReady
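This node dump is `kubectl describe node` output; note the Allocated cpu request of 850m is just the sum of the per-pod requests listed (250m apiserver + 200m controller-manager + 100m each for etcd, scheduler, coredns, and kindnet). To regenerate it:

    kubectl describe node no-preload-043790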
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
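The "martian source" lines are the kernel flagging packets whose source address is implausible on the receiving interface (here pod-CIDR 10.244.0.x traffic arriving on eth0); they appear only while the log_martians sysctl is on. A sketch to inspect or silence it on the node:

    sysctl net.ipv4.conf.all.log_martians            # 1 while these messages are being logged
    sudo sysctl -w net.ipv4.conf.all.log_martians=0  # stop logging them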
	
	
	==> etcd [b7422a7ceb82174bb8873555a4b5ce6b95cd5304cd6b7b7977a07cd1ff3d8c42] <==
	{"level":"warn","ts":"2025-10-29T09:09:02.543535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.555519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.563078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.569956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.576341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.583064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.589622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.596042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.603751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.612145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.619061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.625626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.632056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.638774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.645745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.652661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.659186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.666875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.673469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.680914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.688614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.700332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.706963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.715354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:02.784805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41096","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:38 up 52 min,  0 user,  load average: 6.18, 4.06, 2.47
	Linux no-preload-043790 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31ecfb04e340c7da6e9e49dfdb1c921e4e73319abdd80bca5e0354ab09b1874b] <==
	I1029 09:09:14.690634       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:09:14.690904       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1029 09:09:14.691097       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:09:14.691112       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:09:14.691122       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:09:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:09:14.892698       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:09:14.892771       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:09:14.892784       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:09:14.893293       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:09:15.288259       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:09:15.288460       1 metrics.go:72] Registering metrics
	I1029 09:09:15.288860       1 controller.go:711] "Syncing nftables rules"
	I1029 09:09:24.892886       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:09:24.892952       1 main.go:301] handling current node
	I1029 09:09:34.896194       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:09:34.896225       1 main.go:301] handling current node
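The only error in the kindnet log, "nri plugin exited: failed to connect to NRI service", typically just means NRI is not enabled in CRI-O on this node; kindnet carries on without it. It can be confirmed by checking for the socket path named in the message; a sketch:

    test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "NRI socket absent"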
	
	
	==> kube-apiserver [35f40abfb1f04b8ed4d7d5866836c07357b91086d516c6c30f99f5d480990ea4] <==
	E1029 09:09:03.416313       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1029 09:09:03.448060       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1029 09:09:03.496294       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:09:03.500254       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:03.500362       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:09:03.505649       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:03.506305       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:09:03.619980       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:09:04.298128       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:09:04.303770       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:09:04.303789       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:09:05.152988       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:09:05.209773       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:09:05.306855       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:09:05.318957       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1029 09:09:05.320975       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:09:05.329783       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:09:05.353614       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:09:06.122752       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:09:06.135841       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:09:06.148389       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:09:11.051585       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:11.056982       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:11.099384       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1029 09:09:11.151400       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2a90a207e87d4f009346265d03e8c800a17e3766b9c3662a678d828d0c822ded] <==
	I1029 09:09:10.336983       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:09:10.345763       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:09:10.345890       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:09:10.347096       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:09:10.347126       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:09:10.347215       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:09:10.347265       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:09:10.347227       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:09:10.347371       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:09:10.347383       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:09:10.347414       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:09:10.347429       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:09:10.347443       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:09:10.347543       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:09:10.347625       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:09:10.347724       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-043790"
	I1029 09:09:10.347739       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:09:10.347787       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:09:10.350895       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:09:10.351305       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:09:10.352556       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:09:10.355881       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:09:10.356070       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:09:10.371590       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:09:25.349969       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [89377c726bf6cf6233dc43679a482bdde63df54c0da6fb01df5f9e635ba304e9] <==
	I1029 09:09:12.170100       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:09:12.248647       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:09:12.349709       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:09:12.349757       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1029 09:09:12.349845       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:09:12.372184       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:09:12.372252       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:09:12.378771       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:09:12.379248       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:09:12.379274       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:09:12.381150       1 config.go:200] "Starting service config controller"
	I1029 09:09:12.381175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:09:12.381170       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:09:12.381194       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:09:12.381223       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:09:12.381252       1 config.go:309] "Starting node config controller"
	I1029 09:09:12.381260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:09:12.381247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:09:12.381267       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:09:12.481351       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:09:12.481368       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:09:12.481414       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
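The one complaint in the kube-proxy log is the unset nodePortAddresses warning (the log itself suggests `--nodeport-addresses primary`); the log also records kube-proxy setting route_localnet=1 so NodePorts answer on localhost. The sysctl side of that is easy to verify on the node; a sketch:

    sysctl net.ipv4.conf.all.route_localnet   # expect 1 once kube-proxy has started, per the log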
	
	
	==> kube-scheduler [97064513bce2c52401b58f2a4bc8a7743da07126fde357089b4b290545aa41f5] <==
	E1029 09:09:03.397165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:09:03.397833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:09:03.398020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:09:03.398016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:09:03.398177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:09:03.399671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:09:03.399804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:09:03.399804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:09:03.400549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:09:03.401269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:09:04.253250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:09:04.255701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:09:04.367212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 09:09:04.386180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:09:04.391620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:09:04.395969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:09:04.424207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:09:04.426696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:09:04.466620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:09:04.476856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:09:04.585459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:09:04.621284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:09:04.716885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:09:04.721160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1029 09:09:06.390309       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168102    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f12f7640-1309-4575-aa29-6f262b956f0a-lib-modules\") pod \"kindnet-dlrgv\" (UID: \"f12f7640-1309-4575-aa29-6f262b956f0a\") " pod="kube-system/kindnet-dlrgv"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168166    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpb58\" (UniqueName: \"kubernetes.io/projected/f12f7640-1309-4575-aa29-6f262b956f0a-kube-api-access-lpb58\") pod \"kindnet-dlrgv\" (UID: \"f12f7640-1309-4575-aa29-6f262b956f0a\") " pod="kube-system/kindnet-dlrgv"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168195    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ba63a1c-9709-4ebd-8ca2-664740d92a22-xtables-lock\") pod \"kube-proxy-7dc8p\" (UID: \"0ba63a1c-9709-4ebd-8ca2-664740d92a22\") " pod="kube-system/kube-proxy-7dc8p"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168216    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ba63a1c-9709-4ebd-8ca2-664740d92a22-lib-modules\") pod \"kube-proxy-7dc8p\" (UID: \"0ba63a1c-9709-4ebd-8ca2-664740d92a22\") " pod="kube-system/kube-proxy-7dc8p"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168237    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhgx\" (UniqueName: \"kubernetes.io/projected/0ba63a1c-9709-4ebd-8ca2-664740d92a22-kube-api-access-pdhgx\") pod \"kube-proxy-7dc8p\" (UID: \"0ba63a1c-9709-4ebd-8ca2-664740d92a22\") " pod="kube-system/kube-proxy-7dc8p"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168263    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f12f7640-1309-4575-aa29-6f262b956f0a-cni-cfg\") pod \"kindnet-dlrgv\" (UID: \"f12f7640-1309-4575-aa29-6f262b956f0a\") " pod="kube-system/kindnet-dlrgv"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168283    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f12f7640-1309-4575-aa29-6f262b956f0a-xtables-lock\") pod \"kindnet-dlrgv\" (UID: \"f12f7640-1309-4575-aa29-6f262b956f0a\") " pod="kube-system/kindnet-dlrgv"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: I1029 09:09:11.168302    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ba63a1c-9709-4ebd-8ca2-664740d92a22-kube-proxy\") pod \"kube-proxy-7dc8p\" (UID: \"0ba63a1c-9709-4ebd-8ca2-664740d92a22\") " pod="kube-system/kube-proxy-7dc8p"
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: E1029 09:09:11.276149    2263 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: E1029 09:09:11.276204    2263 projected.go:196] Error preparing data for projected volume kube-api-access-lpb58 for pod kube-system/kindnet-dlrgv: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: E1029 09:09:11.276155    2263 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: E1029 09:09:11.276312    2263 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f12f7640-1309-4575-aa29-6f262b956f0a-kube-api-access-lpb58 podName:f12f7640-1309-4575-aa29-6f262b956f0a nodeName:}" failed. No retries permitted until 2025-10-29 09:09:11.776281303 +0000 UTC m=+5.864401163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lpb58" (UniqueName: "kubernetes.io/projected/f12f7640-1309-4575-aa29-6f262b956f0a-kube-api-access-lpb58") pod "kindnet-dlrgv" (UID: "f12f7640-1309-4575-aa29-6f262b956f0a") : configmap "kube-root-ca.crt" not found
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: E1029 09:09:11.276326    2263 projected.go:196] Error preparing data for projected volume kube-api-access-pdhgx for pod kube-system/kube-proxy-7dc8p: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:11 no-preload-043790 kubelet[2263]: E1029 09:09:11.276432    2263 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ba63a1c-9709-4ebd-8ca2-664740d92a22-kube-api-access-pdhgx podName:0ba63a1c-9709-4ebd-8ca2-664740d92a22 nodeName:}" failed. No retries permitted until 2025-10-29 09:09:11.776407324 +0000 UTC m=+5.864527178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pdhgx" (UniqueName: "kubernetes.io/projected/0ba63a1c-9709-4ebd-8ca2-664740d92a22-kube-api-access-pdhgx") pod "kube-proxy-7dc8p" (UID: "0ba63a1c-9709-4ebd-8ca2-664740d92a22") : configmap "kube-root-ca.crt" not found
	Oct 29 09:09:12 no-preload-043790 kubelet[2263]: I1029 09:09:12.153723    2263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7dc8p" podStartSLOduration=1.153699438 podStartE2EDuration="1.153699438s" podCreationTimestamp="2025-10-29 09:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:12.153399024 +0000 UTC m=+6.241518890" watchObservedRunningTime="2025-10-29 09:09:12.153699438 +0000 UTC m=+6.241819301"
	Oct 29 09:09:15 no-preload-043790 kubelet[2263]: I1029 09:09:15.159944    2263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dlrgv" podStartSLOduration=1.821038745 podStartE2EDuration="4.159920089s" podCreationTimestamp="2025-10-29 09:09:11 +0000 UTC" firstStartedPulling="2025-10-29 09:09:12.050103369 +0000 UTC m=+6.138223232" lastFinishedPulling="2025-10-29 09:09:14.388984715 +0000 UTC m=+8.477104576" observedRunningTime="2025-10-29 09:09:15.159822205 +0000 UTC m=+9.247942069" watchObservedRunningTime="2025-10-29 09:09:15.159920089 +0000 UTC m=+9.248039965"
	Oct 29 09:09:25 no-preload-043790 kubelet[2263]: I1029 09:09:25.166377    2263 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:09:25 no-preload-043790 kubelet[2263]: I1029 09:09:25.275117    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/224fa5f2-7b79-4a88-aff2-e3015c0eb63f-tmp\") pod \"storage-provisioner\" (UID: \"224fa5f2-7b79-4a88-aff2-e3015c0eb63f\") " pod="kube-system/storage-provisioner"
	Oct 29 09:09:25 no-preload-043790 kubelet[2263]: I1029 09:09:25.275174    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsk8m\" (UniqueName: \"kubernetes.io/projected/8f0fcbc0-6872-42e0-a601-21fc1d777bc3-kube-api-access-bsk8m\") pod \"coredns-66bc5c9577-bgslp\" (UID: \"8f0fcbc0-6872-42e0-a601-21fc1d777bc3\") " pod="kube-system/coredns-66bc5c9577-bgslp"
	Oct 29 09:09:25 no-preload-043790 kubelet[2263]: I1029 09:09:25.275210    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f0fcbc0-6872-42e0-a601-21fc1d777bc3-config-volume\") pod \"coredns-66bc5c9577-bgslp\" (UID: \"8f0fcbc0-6872-42e0-a601-21fc1d777bc3\") " pod="kube-system/coredns-66bc5c9577-bgslp"
	Oct 29 09:09:25 no-preload-043790 kubelet[2263]: I1029 09:09:25.275241    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph4c4\" (UniqueName: \"kubernetes.io/projected/224fa5f2-7b79-4a88-aff2-e3015c0eb63f-kube-api-access-ph4c4\") pod \"storage-provisioner\" (UID: \"224fa5f2-7b79-4a88-aff2-e3015c0eb63f\") " pod="kube-system/storage-provisioner"
	Oct 29 09:09:26 no-preload-043790 kubelet[2263]: I1029 09:09:26.208338    2263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bgslp" podStartSLOduration=15.208311976 podStartE2EDuration="15.208311976s" podCreationTimestamp="2025-10-29 09:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:26.193926096 +0000 UTC m=+20.282045970" watchObservedRunningTime="2025-10-29 09:09:26.208311976 +0000 UTC m=+20.296431839"
	Oct 29 09:09:28 no-preload-043790 kubelet[2263]: I1029 09:09:28.093599    2263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.093565975 podStartE2EDuration="16.093565975s" podCreationTimestamp="2025-10-29 09:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:26.223715403 +0000 UTC m=+20.311835266" watchObservedRunningTime="2025-10-29 09:09:28.093565975 +0000 UTC m=+22.181685839"
	Oct 29 09:09:28 no-preload-043790 kubelet[2263]: I1029 09:09:28.195952    2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmlf\" (UniqueName: \"kubernetes.io/projected/52579065-c1ba-441f-8953-b5336db20cc0-kube-api-access-8rmlf\") pod \"busybox\" (UID: \"52579065-c1ba-441f-8953-b5336db20cc0\") " pod="default/busybox"
	Oct 29 09:09:30 no-preload-043790 kubelet[2263]: I1029 09:09:30.259892    2263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.527400965 podStartE2EDuration="2.259867478s" podCreationTimestamp="2025-10-29 09:09:28 +0000 UTC" firstStartedPulling="2025-10-29 09:09:28.426316458 +0000 UTC m=+22.514436303" lastFinishedPulling="2025-10-29 09:09:29.158782958 +0000 UTC m=+23.246902816" observedRunningTime="2025-10-29 09:09:30.259646996 +0000 UTC m=+24.347766860" watchObservedRunningTime="2025-10-29 09:09:30.259867478 +0000 UTC m=+24.347987342"
	
	
	==> storage-provisioner [8394dc05d9de2e14e5404ad5b6c7d85f95c5204bd738f113d9b9b4798d1d1bdc] <==
	I1029 09:09:25.564696       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:09:25.573376       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:09:25.573434       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:09:25.576878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:25.582436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:09:25.582611       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:09:25.582783       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f909cb5-1c5e-4bfe-af9b-4b8cebee1396", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-043790_b1afb083-4f54-4086-b6d3-a44bd30c7905 became leader
	I1029 09:09:25.582865       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-043790_b1afb083-4f54-4086-b6d3-a44bd30c7905!
	W1029 09:09:25.585970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:25.591018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:09:25.683524       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-043790_b1afb083-4f54-4086-b6d3-a44bd30c7905!
	W1029 09:09:27.595170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:27.601424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:29.605413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:29.610191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:31.613583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:31.634966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:33.638448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:33.643771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:35.647466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:35.651259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:37.654765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:09:37.659425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
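Note: the storage-provisioner log above takes its leader lock on the legacy Endpoints object kube-system/k8s.io-minikube-hostpath, which is why every election heartbeat draws a "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. The sketch below shows the Lease-based lock client-go recommends instead; the lock name comes from the log, but the timings and callbacks are illustrative and this is not minikube's actual code.

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the provisioner runs in-cluster
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		hostname, _ := os.Hostname()

		// A Lease lock replaces the deprecated Endpoints lock seen in the log.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // same lock name as the log
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller here
				},
				OnStoppedLeading: func() {
					// stop cleanly when leadership is lost
				},
			},
		})
	}

A Lease carries the same identity and renew-time fields, so the election behaves the same; only the lock object type, and with it the deprecation warning, changes.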
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043790 -n no-preload-043790
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-043790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (341.596192ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
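Note: the MK_ADDON_ENABLE_PAUSED failure above (and the matching EnableAddonWhileActive and Pause failures throughout this report) traces back to the paused-container check shelling out to "sudo runc list -f json", which exits non-zero because the default runc state directory /run/runc does not exist on the crio node. A rough Go sketch of that kind of check follows, assuming runc's JSON list output carries "id" and "status" fields; it is not minikube's actual implementation.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcState mirrors a subset of the fields "runc list -f json" emits.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// Fails exactly like the log above when /run/runc was never created:
		// "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		paused := []string{}
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		fmt.Println("paused containers:", paused)
	}

On a healthy node the command prints a JSON array (or null when nothing is running), and only then can the pause state actually be inspected.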
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-017274 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-017274 describe deploy/metrics-server -n kube-system: exit status 1 (98.638513ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-017274 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
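Note: the check at start_stop_delete_test.go:219 asserts that the metrics-server deployment description references the image under the overridden registry, i.e. that it contains "fake.domain/registry.k8s.io/echoserver:1.4"; here the describe call itself failed, so the deployment info is empty. A hedged reconstruction of that assertion is sketched below; the test name and structure are illustrative, not the suite's actual code.

	package verify

	import (
		"os/exec"
		"strings"
		"testing"
	)

	func TestMetricsServerImage(t *testing.T) {
		out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-017274",
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			// This is the branch taken above: the addon enable failed earlier,
			// so the deployment was never created and describe exits non-zero.
			t.Fatalf("failed to get info on deployment: %v\n%s", err, out)
		}
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(string(out), want) {
			t.Errorf("addon did not load correct image. Expected to contain %q. Deployment info:\n%s", want, out)
		}
	}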
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-017274
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-017274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb",
	        "Created": "2025-10-29T09:09:32.123718192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303729,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:09:32.161036158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/hosts",
	        "LogPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb-json.log",
	        "Name": "/default-k8s-diff-port-017274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-017274:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-017274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb",
	                "LowerDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-017274",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-017274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-017274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-017274",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-017274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7b320f06d67adffbd345cdbd58ee60426da422fcfe2fbf67b9f4b57f246ad61",
	            "SandboxKey": "/var/run/docker/netns/b7b320f06d67",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-017274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ac:89:ad:82:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3eee94d37a532f968084ecba10a40919f575531a63b06a3b1433848fa7502a53",
	                    "EndpointID": "2cc6d9726c3383a09cf9f8c7cc58f163be37e23676dd2d7ed3e19eeacf7449dd",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-017274",
	                        "7cabc8999167"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
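Note: every PortBindings entry in the inspect output above requests HostIp 127.0.0.1 with an empty HostPort, so Docker picks ephemeral host ports, which appear only under NetworkSettings.Ports (33103-33107 here). The cli_runner lines later in this log resolve them by passing a Go template to docker container inspect; a standalone sketch of the same lookup:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the cli_runner lines below pass to docker:
		// index into NetworkSettings.Ports["22/tcp"][0].HostPort.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"default-k8s-diff-port-017274").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Prints the ephemeral SSH port, 33103 in the inspect output above.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}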
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-017274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-017274 logs -n 25: (1.522655703s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-240549 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-240549 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo containerd config dump                                                                                                                                                                                                  │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo crio config                                                                                                                                                                                                             │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p bridge-240549                                                                                                                                                                                                                              │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-318335                                                                                                                                                                                                               │ disable-driver-mounts-318335 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:09:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:09:56.151665  310655 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:09:56.151935  310655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:56.151946  310655 out.go:374] Setting ErrFile to fd 2...
	I1029 09:09:56.151953  310655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:56.152289  310655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:09:56.152847  310655 out.go:368] Setting JSON to false
	I1029 09:09:56.154328  310655 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3144,"bootTime":1761725852,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:09:56.154442  310655 start.go:143] virtualization: kvm guest
	I1029 09:09:56.156308  310655 out.go:179] * [no-preload-043790] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:09:54.646281  308587 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:09:54.646311  308587 machine.go:97] duration metric: took 4.698411446s to provisionDockerMachine
	I1029 09:09:54.646325  308587 start.go:293] postStartSetup for "old-k8s-version-096492" (driver="docker")
	I1029 09:09:54.646337  308587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:09:54.646389  308587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:09:54.646425  308587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:09:54.665135  308587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:09:54.765506  308587 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:09:54.769234  308587 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:09:54.769262  308587 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:09:54.769276  308587 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:09:54.769337  308587 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:09:54.769454  308587 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:09:54.769580  308587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:09:54.777383  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:09:54.794978  308587 start.go:296] duration metric: took 148.639577ms for postStartSetup
	I1029 09:09:54.795078  308587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:09:54.795115  308587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:09:54.812948  308587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:09:54.910551  308587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:09:54.915606  308587 fix.go:56] duration metric: took 5.269444535s for fixHost
	I1029 09:09:54.915642  308587 start.go:83] releasing machines lock for "old-k8s-version-096492", held for 5.269498751s
	I1029 09:09:54.915706  308587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-096492
	I1029 09:09:54.933966  308587 ssh_runner.go:195] Run: cat /version.json
	I1029 09:09:54.934034  308587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:09:54.934047  308587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:09:54.934104  308587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:09:54.954083  308587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:09:54.954453  308587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:09:55.052357  308587 ssh_runner.go:195] Run: systemctl --version
	I1029 09:09:55.110191  308587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:09:55.145614  308587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:09:55.150425  308587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:09:55.150495  308587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:09:55.158769  308587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:09:55.158795  308587 start.go:496] detecting cgroup driver to use...
	I1029 09:09:55.158824  308587 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:09:55.158872  308587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:09:55.174801  308587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:09:55.188400  308587 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:09:55.188487  308587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:09:55.203099  308587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:09:55.215764  308587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:09:55.300251  308587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:09:55.388801  308587 docker.go:234] disabling docker service ...
	I1029 09:09:55.388871  308587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:09:55.406586  308587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:09:55.421410  308587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:09:55.525578  308587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:09:55.624788  308587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:09:55.638894  308587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:09:55.655254  308587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1029 09:09:55.655315  308587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:55.664693  308587 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:09:55.664749  308587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:55.673763  308587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:55.683005  308587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:55.692496  308587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:09:55.703060  308587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:55.733207  308587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:55.747582  308587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:09:55.763273  308587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:09:55.774668  308587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:09:55.783496  308587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:09:55.905748  308587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:09:56.045545  308587 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:09:56.045615  308587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:09:56.050229  308587 start.go:564] Will wait 60s for crictl version
	I1029 09:09:56.050288  308587 ssh_runner.go:195] Run: which crictl
	I1029 09:09:56.054356  308587 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:09:56.093076  308587 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:09:56.093167  308587 ssh_runner.go:195] Run: crio --version
	I1029 09:09:56.125190  308587 ssh_runner.go:195] Run: crio --version
	I1029 09:09:56.157682  310655 notify.go:221] Checking for updates...
	I1029 09:09:56.159142  310655 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:09:56.159138  308587 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1029 09:09:56.160387  310655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:09:56.162321  310655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:09:56.163546  310655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:09:56.164640  310655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:09:56.165817  310655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:09:56.167596  310655 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:56.168305  310655 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:09:56.195850  310655 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:09:56.195923  310655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:56.267871  310655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:80 SystemTime:2025-10-29 09:09:56.254566729 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:56.268024  310655 docker.go:319] overlay module found
	I1029 09:09:56.269839  310655 out.go:179] * Using the docker driver based on existing profile
	I1029 09:09:56.271026  310655 start.go:309] selected driver: docker
	I1029 09:09:56.271048  310655 start.go:930] validating driver "docker" against &{Name:no-preload-043790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-043790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:09:56.271149  310655 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:09:56.271683  310655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:09:56.337038  310655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-29 09:09:56.326070028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:09:56.337381  310655 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:09:56.337423  310655 cni.go:84] Creating CNI manager for ""
	I1029 09:09:56.337492  310655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:09:56.337548  310655 start.go:353] cluster config:
	{Name:no-preload-043790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-043790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:09:56.340213  310655 out.go:179] * Starting "no-preload-043790" primary control-plane node in "no-preload-043790" cluster
	I1029 09:09:56.341208  310655 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:09:56.342246  310655 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:09:56.343625  310655 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:09:56.343655  310655 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:09:56.343770  310655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/config.json ...
	I1029 09:09:56.343891  310655 cache.go:107] acquiring lock: {Name:mk3012e1989873d2173803ab825c6a50f627f904 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.343899  310655 cache.go:107] acquiring lock: {Name:mka0b3472290d2761db2951b98c5e317abae8b5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.343967  310655 cache.go:107] acquiring lock: {Name:mkcedf4b011ce3a207ae33112ef9b9f16eaf002a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.344041  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1029 09:09:56.344040  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1029 09:09:56.343899  310655 cache.go:107] acquiring lock: {Name:mka999b0130fbf2e7775bf235cea08b428909c98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.344058  310655 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 85.098µs
	I1029 09:09:56.344072  310655 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1029 09:09:56.344056  310655 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 179.486µs
	I1029 09:09:56.344055  310655 cache.go:107] acquiring lock: {Name:mkded420c4ad22965962f37afdd73c7c3f306037 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.344099  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1029 09:09:56.344093  310655 cache.go:107] acquiring lock: {Name:mkc80aa5e9a8cdb1e5a58f25af16325ded0d05ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.344080  310655 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1029 09:09:56.344112  310655 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 226.124µs
	I1029 09:09:56.344122  310655 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1029 09:09:56.344126  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1029 09:09:56.344130  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1029 09:09:56.344135  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1029 09:09:56.344135  310655 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 255.715µs
	I1029 09:09:56.344137  310655 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 46.257µs
	I1029 09:09:56.344145  310655 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1029 09:09:56.344146  310655 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1029 09:09:56.344145  310655 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 159.092µs
	I1029 09:09:56.344161  310655 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1029 09:09:56.344207  310655 cache.go:107] acquiring lock: {Name:mkfad61b7357bc18ad45a8845f24765a6744e5ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.344241  310655 cache.go:107] acquiring lock: {Name:mk746037b0ed1507cedd1550aab329e718091350 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.344360  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1029 09:09:56.344373  310655 cache.go:115] /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1029 09:09:56.344378  310655 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 224.453µs
	I1029 09:09:56.344393  310655 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1029 09:09:56.344392  310655 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 426.498µs
	I1029 09:09:56.344412  310655 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21800-3727/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1029 09:09:56.344422  310655 cache.go:87] Successfully saved all images to host disk.
	I1029 09:09:56.373117  310655 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:09:56.373138  310655 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:09:56.373155  310655 cache.go:233] Successfully downloaded all kic artifacts
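The cache.go lines above trace minikube's per-image flow: take a named file lock, stat the cached tarball under .minikube/cache/images, and skip the export when it is already present (hence the microsecond "took" timings). A minimal sketch of that check-then-skip pattern, using an illustrative cachedOrSave helper rather than minikube's actual API:

    // Check-then-skip image caching, as in the cache.go lines above.
    // cachedOrSave and the path layout are illustrative, not minikube's API.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func cachedOrSave(cacheDir, image string, saveTar func(image, tar string) error) error {
        // "registry.k8s.io/pause:3.10.1" -> ".../registry.k8s.io/pause_3.10.1"
        tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
        if _, err := os.Stat(tar); err == nil {
            fmt.Printf("cache image %q -> %q: exists, skipping\n", image, tar)
            return nil // nothing to do; the log still reports "succeeded"
        }
        return saveTar(image, tar)
    }

    func main() {
        _ = cachedOrSave("/tmp/cache/images/amd64", "registry.k8s.io/pause:3.10.1",
            func(image, tar string) error { fmt.Println("exporting", image); return nil })
    }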
	I1029 09:09:56.373179  310655 start.go:360] acquireMachinesLock for no-preload-043790: {Name:mkfa3e919f9bd5bb3e1b2eb1ab6e72697efa9d66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:09:56.373246  310655 start.go:364] duration metric: took 43.093µs to acquireMachinesLock for "no-preload-043790"
	I1029 09:09:56.373267  310655 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:09:56.373272  310655 fix.go:54] fixHost starting: 
	I1029 09:09:56.373507  310655 cli_runner.go:164] Run: docker container inspect no-preload-043790 --format={{.State.Status}}
	I1029 09:09:56.394986  310655 fix.go:112] recreateIfNeeded on no-preload-043790: state=Stopped err=<nil>
	W1029 09:09:56.395053  310655 fix.go:138] unexpected machine state, will restart: <nil>
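Here fix.go has mapped the container's status to state=Stopped and chosen to restart rather than recreate. The state comes from the `docker container inspect --format={{.State.Status}}` call on the line above; a condensed sketch of that query (error handling trimmed, not minikube's actual code):

    // Query a container's state the same way the cli_runner line above does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil // e.g. "running", "exited"
    }

    func main() {
        if state, err := containerState("no-preload-043790"); err == nil && state != "running" {
            fmt.Println("unexpected machine state, will restart:", state)
        }
    }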
	I1029 09:09:52.518070  302556 addons.go:515] duration metric: took 459.536075ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:09:52.828087  302556 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-017274" context rescaled to 1 replicas
	W1029 09:09:54.327788  302556 node_ready.go:57] node "default-k8s-diff-port-017274" has "Ready":"False" status (will retry)
	W1029 09:09:56.328807  302556 node_ready.go:57] node "default-k8s-diff-port-017274" has "Ready":"False" status (will retry)
	I1029 09:09:56.160350  308587 cli_runner.go:164] Run: docker network inspect old-k8s-version-096492 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:09:56.181925  308587 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:09:56.187194  308587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
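The one-liner above makes the host.minikube.internal mapping idempotent: grep -v strips any stale entry, echo appends the current gateway IP, and the result is copied back with cp rather than renamed, since inside the node container /etc/hosts is a bind mount that cannot be atomically replaced. The same pattern repeats below for control-plane.minikube.internal. A native sketch of the same update (paths taken from the log; pinHost is illustrative):

    // Idempotent /etc/hosts pinning, equivalent to the shell one-liner above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        // Write in place (no rename): /etc/hosts is bind-mounted in containers.
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }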
	I1029 09:09:56.199321  308587 kubeadm.go:884] updating cluster {Name:old-k8s-version-096492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-096492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:09:56.199447  308587 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 09:09:56.199497  308587 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:09:56.246532  308587 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:09:56.246774  308587 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:09:56.246871  308587 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:09:56.280016  308587 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:09:56.280039  308587 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:09:56.280047  308587 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1029 09:09:56.280171  308587 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-096492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-096492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
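One detail of the kubelet drop-in above is easy to misread: the bare "ExecStart=" line is deliberate. In a systemd override, assigning an empty value to ExecStart clears the command inherited from the base kubelet.service, so the following ExecStart line fully replaces it rather than adding a second command.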
	I1029 09:09:56.280255  308587 ssh_runner.go:195] Run: crio config
	I1029 09:09:56.341294  308587 cni.go:84] Creating CNI manager for ""
	I1029 09:09:56.341320  308587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:09:56.341337  308587 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:09:56.341370  308587 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-096492 NodeName:old-k8s-version-096492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:09:56.341549  308587 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-096492"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:09:56.341609  308587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1029 09:09:56.354235  308587 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:09:56.354306  308587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:09:56.364945  308587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1029 09:09:56.379595  308587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:09:56.393619  308587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
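The rendered kubeadm config is staged as kubeadm.yaml.new and only swapped in if it differs from what is already on the node (the diff runs in the restart path below). Within the KubeletConfiguration itself, imageGCHighThresholdPercent: 100 and the 0% evictionHard thresholds effectively disable disk-pressure eviction, which suits a short-lived CI node. A small sketch of reading those fields back, assuming gopkg.in/yaml.v3 is available and using an illustrative struct rather than the upstream type:

    // Parse the eviction-related fields of the KubeletConfiguration above.
    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3" // assumed dependency for this sketch
    )

    type kubeletCfg struct {
        ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
        EvictionHard                map[string]string `yaml:"evictionHard"`
        FailSwapOn                  bool              `yaml:"failSwapOn"`
    }

    func main() {
        doc := "imageGCHighThresholdPercent: 100\nevictionHard:\n  nodefs.available: \"0%\"\nfailSwapOn: false\n"
        var c kubeletCfg
        if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
            panic(err)
        }
        fmt.Println(c.ImageGCHighThresholdPercent, c.EvictionHard["nodefs.available"], c.FailSwapOn)
    }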
	I1029 09:09:56.409228  308587 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:09:56.413969  308587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:09:56.429187  308587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:09:56.546276  308587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:09:56.571890  308587 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492 for IP: 192.168.85.2
	I1029 09:09:56.571914  308587 certs.go:195] generating shared ca certs ...
	I1029 09:09:56.571933  308587 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:56.572243  308587 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:09:56.572304  308587 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:09:56.572317  308587 certs.go:257] generating profile certs ...
	I1029 09:09:56.572435  308587 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492/client.key
	I1029 09:09:56.572518  308587 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492/apiserver.key.8c4b5e91
	I1029 09:09:56.572582  308587 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492/proxy-client.key
	I1029 09:09:56.572753  308587 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:09:56.572795  308587 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:09:56.572809  308587 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:09:56.572840  308587 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:09:56.572885  308587 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:09:56.572919  308587 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:09:56.572972  308587 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:09:56.573725  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:09:56.596105  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:09:56.618658  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:09:56.638430  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:09:56.659866  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1029 09:09:56.688205  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:09:56.709647  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:09:56.729771  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/old-k8s-version-096492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:09:56.749874  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:09:56.769347  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:09:56.788272  308587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:09:56.809049  308587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:09:56.824329  308587 ssh_runner.go:195] Run: openssl version
	I1029 09:09:56.832540  308587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:09:56.841596  308587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:09:56.845525  308587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:09:56.845577  308587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:09:56.883555  308587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:09:56.892730  308587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:09:56.901899  308587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:09:56.906328  308587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:09:56.906393  308587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:09:56.946591  308587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:09:56.956818  308587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:09:56.967908  308587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:09:56.973386  308587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:09:56.973476  308587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:09:57.020310  308587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:09:57.028693  308587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:09:57.032711  308587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:09:57.075879  308587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:09:57.121072  308587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:09:57.164605  308587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:09:57.216933  308587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:09:57.258973  308587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
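Each of the openssl runs above uses `-checkend 86400`, which exits zero only if the certificate is still valid 24 hours from now; a non-zero exit is what would prompt regeneration on this restart path. The equivalent test with Go's standard library (the path is just one example from the log):

    // Same test as `openssl x509 -checkend 86400`: does the cert expire within d?
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }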
	I1029 09:09:57.297179  308587 kubeadm.go:401] StartCluster: {Name:old-k8s-version-096492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-096492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:09:57.297281  308587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:09:57.297336  308587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:09:57.328150  308587 cri.go:89] found id: "eb75fa40098e331528b7e94c2e2a71c766423c56a220a24eaaa69f66efdce4b6"
	I1029 09:09:57.328172  308587 cri.go:89] found id: "d92dd056da0fc02c82efa03b27cf291df638b891640d4514b8dea24f11e44842"
	I1029 09:09:57.328187  308587 cri.go:89] found id: "f75d2e46364d0954bc8168a45bbf13f9854e2c28802b489937d6d807e197c25c"
	I1029 09:09:57.328192  308587 cri.go:89] found id: "3c2ce552cdf8c320285c2bb9f072826ac4a862ddc09798713d1491913854ccfa"
	I1029 09:09:57.328197  308587 cri.go:89] found id: ""
	I1029 09:09:57.328251  308587 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:09:57.341584  308587 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:09:57Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:09:57.341645  308587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:09:57.350656  308587 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:09:57.350677  308587 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:09:57.350723  308587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:09:57.358455  308587 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:09:57.358977  308587 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-096492" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:09:57.359298  308587 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-3727/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-096492" cluster setting kubeconfig missing "old-k8s-version-096492" context setting]
	I1029 09:09:57.359890  308587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:57.361163  308587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:09:57.369480  308587 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:09:57.369516  308587 kubeadm.go:602] duration metric: took 18.828806ms to restartPrimaryControlPlane
	I1029 09:09:57.369526  308587 kubeadm.go:403] duration metric: took 72.354852ms to StartCluster
	I1029 09:09:57.369545  308587 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:57.369613  308587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:09:57.370769  308587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:09:57.371039  308587 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:09:57.371109  308587 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:09:57.371211  308587 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-096492"
	I1029 09:09:57.371213  308587 config.go:182] Loaded profile config "old-k8s-version-096492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:09:57.371228  308587 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-096492"
	I1029 09:09:57.371231  308587 addons.go:70] Setting dashboard=true in profile "old-k8s-version-096492"
	I1029 09:09:57.371255  308587 addons.go:239] Setting addon dashboard=true in "old-k8s-version-096492"
	W1029 09:09:57.371237  308587 addons.go:248] addon storage-provisioner should already be in state true
	W1029 09:09:57.371264  308587 addons.go:248] addon dashboard should already be in state true
	I1029 09:09:57.371284  308587 host.go:66] Checking if "old-k8s-version-096492" exists ...
	I1029 09:09:57.371294  308587 host.go:66] Checking if "old-k8s-version-096492" exists ...
	I1029 09:09:57.371246  308587 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-096492"
	I1029 09:09:57.371324  308587 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-096492"
	I1029 09:09:57.371641  308587 cli_runner.go:164] Run: docker container inspect old-k8s-version-096492 --format={{.State.Status}}
	I1029 09:09:57.371805  308587 cli_runner.go:164] Run: docker container inspect old-k8s-version-096492 --format={{.State.Status}}
	I1029 09:09:57.371809  308587 cli_runner.go:164] Run: docker container inspect old-k8s-version-096492 --format={{.State.Status}}
	I1029 09:09:57.373785  308587 out.go:179] * Verifying Kubernetes components...
	I1029 09:09:57.375056  308587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:09:57.397224  308587 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-096492"
	W1029 09:09:57.397249  308587 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:09:57.397276  308587 host.go:66] Checking if "old-k8s-version-096492" exists ...
	I1029 09:09:57.397777  308587 cli_runner.go:164] Run: docker container inspect old-k8s-version-096492 --format={{.State.Status}}
	I1029 09:09:57.398607  308587 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:09:57.400399  308587 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:09:57.401770  308587 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:09:57.401840  308587 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:09:57.401859  308587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:09:57.401933  308587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:09:57.402848  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:09:57.402866  308587 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:09:57.402917  308587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:09:57.431394  308587 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:09:57.431420  308587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:09:57.431501  308587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:09:57.438024  308587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:09:57.439336  308587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:09:57.456089  308587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:09:57.518311  308587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:09:57.531710  308587 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-096492" to be "Ready" ...
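The "waiting up to 6m0s" line starts a poll of the node's Ready condition; the node_ready.go:57 warnings seen earlier are individual retries of the same check. A compact equivalent with client-go (kubeconfig path and node name are placeholders taken from the log):

    // Poll a node's Ready condition, as node_ready.go does above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-096492", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // each miss is a node_ready.go "will retry"
        }
        fmt.Println("timed out waiting for node to be Ready")
    }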
	I1029 09:09:57.551513  308587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:09:57.556292  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:09:57.556311  308587 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:09:57.572683  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:09:57.572707  308587 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:09:57.574789  308587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:09:57.591234  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:09:57.591255  308587 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:09:57.606283  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:09:57.606306  308587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:09:57.620657  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:09:57.620681  308587 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:09:57.638894  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:09:57.638929  308587 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:09:57.654197  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:09:57.654222  308587 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:09:57.667232  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:09:57.667257  308587 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:09:57.680197  308587 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:09:57.680222  308587 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:09:57.696064  308587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:09:55.922944  310203 out.go:252] * Restarting existing docker container for "embed-certs-834228" ...
	I1029 09:09:55.923025  310203 cli_runner.go:164] Run: docker start embed-certs-834228
	I1029 09:09:56.205846  310203 cli_runner.go:164] Run: docker container inspect embed-certs-834228 --format={{.State.Status}}
	I1029 09:09:56.231182  310203 kic.go:430] container "embed-certs-834228" state is running.
	I1029 09:09:56.231660  310203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834228
	I1029 09:09:56.256844  310203 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/config.json ...
	I1029 09:09:56.257192  310203 machine.go:94] provisionDockerMachine start ...
	I1029 09:09:56.257279  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:09:56.280323  310203 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:56.280661  310203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1029 09:09:56.280682  310203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:09:56.281324  310203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57056->127.0.0.1:33113: read: connection reset by peer
	I1029 09:09:59.431128  310203 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-834228
	
	I1029 09:09:59.431160  310203 ubuntu.go:182] provisioning hostname "embed-certs-834228"
	I1029 09:09:59.431222  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:09:59.454840  310203 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:59.455079  310203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1029 09:09:59.455094  310203 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-834228 && echo "embed-certs-834228" | sudo tee /etc/hostname
	I1029 09:09:59.612868  310203 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-834228
	
	I1029 09:09:59.612949  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:09:59.632575  310203 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:59.632798  310203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1029 09:09:59.632818  310203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-834228' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-834228/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-834228' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:09:59.782227  310203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:09:59.782260  310203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:09:59.782311  310203 ubuntu.go:190] setting up certificates
	I1029 09:09:59.782331  310203 provision.go:84] configureAuth start
	I1029 09:09:59.782400  310203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834228
	I1029 09:09:59.803230  310203 provision.go:143] copyHostCerts
	I1029 09:09:59.803313  310203 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:09:59.803330  310203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:09:59.803435  310203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:09:59.803591  310203 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:09:59.803605  310203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:09:59.803649  310203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:09:59.803764  310203 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:09:59.803774  310203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:09:59.803812  310203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:09:59.803904  310203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.embed-certs-834228 san=[127.0.0.1 192.168.76.2 embed-certs-834228 localhost minikube]
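provision.go:117 mints a server certificate whose subject-alternative names are exactly the mix shown in the san=[...] list: loopback, the container IP, the machine name, localhost, and minikube. A sketch of building such a template with crypto/x509 (random serials and CA signing are elided; the values are copied from the log line):

    // Server-cert template with the SANs from the provision.go line above.
    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func serverCertTemplate() *x509.Certificate {
        return &x509.Certificate{
            SerialNumber: big.NewInt(1), // real code would use a random serial
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-834228"}},
            DNSNames:     []string{"embed-certs-834228", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
    }

    func main() {
        t := serverCertTemplate()
        fmt.Println(t.DNSNames, t.IPAddresses)
    }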
	I1029 09:09:59.995061  310203 provision.go:177] copyRemoteCerts
	I1029 09:09:59.995114  310203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:09:59.995148  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:00.016471  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:00.140802  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:10:00.163141  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:10:00.182318  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:10:00.200457  310203 provision.go:87] duration metric: took 418.107546ms to configureAuth
	I1029 09:10:00.200493  310203 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:10:00.200724  310203 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:00.200842  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:00.220698  310203 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:00.221070  310203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1029 09:10:00.221096  310203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:10:00.617209  310203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:10:00.617234  310203 machine.go:97] duration metric: took 4.360021903s to provisionDockerMachine
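The sysconfig write that just completed marks the entire service CIDR (10.96.0.0/12) as an insecure registry for CRI-O, so pulls from in-cluster ClusterIP registries (such as the registry addon) work without TLS; the systemctl restart applies the option immediately.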
	I1029 09:10:00.617248  310203 start.go:293] postStartSetup for "embed-certs-834228" (driver="docker")
	I1029 09:10:00.617261  310203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:10:00.617321  310203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:10:00.617373  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:00.643389  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:09:56.396907  310655 out.go:252] * Restarting existing docker container for "no-preload-043790" ...
	I1029 09:09:56.397011  310655 cli_runner.go:164] Run: docker start no-preload-043790
	I1029 09:09:56.695285  310655 cli_runner.go:164] Run: docker container inspect no-preload-043790 --format={{.State.Status}}
	I1029 09:09:56.715238  310655 kic.go:430] container "no-preload-043790" state is running.
	I1029 09:09:56.715624  310655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-043790
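The Go-template inspect above can be run standalone to read a container's network addresses (container name from the log; the output shown is illustrative):

	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' no-preload-043790
	# e.g. prints "192.168.94.2," (the IPv6 field is empty when v6 is disabled)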
	I1029 09:09:56.736040  310655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/config.json ...
	I1029 09:09:56.736330  310655 machine.go:94] provisionDockerMachine start ...
	I1029 09:09:56.736430  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:09:56.756381  310655 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:56.756687  310655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1029 09:09:56.756705  310655 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:09:56.757328  310655 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35908->127.0.0.1:33118: read: connection reset by peer
	I1029 09:09:59.922068  310655 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-043790
	
	I1029 09:09:59.922106  310655 ubuntu.go:182] provisioning hostname "no-preload-043790"
	I1029 09:09:59.922167  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:09:59.952119  310655 main.go:143] libmachine: Using SSH client type: native
	I1029 09:09:59.952344  310655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1029 09:09:59.952358  310655 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-043790 && echo "no-preload-043790" | sudo tee /etc/hostname
	I1029 09:10:00.123966  310655 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-043790
	
	I1029 09:10:00.124103  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:00.148070  310655 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:00.148362  310655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1029 09:10:00.148391  310655 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-043790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-043790/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-043790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:10:00.297648  310655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
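A quick spot-check that the hostname pinning above took effect (hostname taken from the log):

	grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 no-preload-043790
	hostname                       # expect: no-preload-043790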
	I1029 09:10:00.297680  310655 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:10:00.297718  310655 ubuntu.go:190] setting up certificates
	I1029 09:10:00.297731  310655 provision.go:84] configureAuth start
	I1029 09:10:00.297792  310655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-043790
	I1029 09:10:00.316968  310655 provision.go:143] copyHostCerts
	I1029 09:10:00.317052  310655 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:10:00.317069  310655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:10:00.317129  310655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:10:00.317216  310655 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:10:00.317224  310655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:10:00.317245  310655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:10:00.317295  310655 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:10:00.317302  310655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:10:00.317321  310655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:10:00.317373  310655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.no-preload-043790 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-043790]
	I1029 09:10:00.512745  310655 provision.go:177] copyRemoteCerts
	I1029 09:10:00.512823  310655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:10:00.512884  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:00.537319  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:00.657959  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:10:00.691596  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:10:00.719944  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:10:00.744263  310655 provision.go:87] duration metric: took 446.513439ms to configureAuth
	I1029 09:10:00.744295  310655 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:10:00.744539  310655 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:00.744663  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:00.770699  310655 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:00.771010  310655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1029 09:10:00.771040  310655 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:10:00.087001  308587 node_ready.go:49] node "old-k8s-version-096492" is "Ready"
	I1029 09:10:00.087034  308587 node_ready.go:38] duration metric: took 2.555289877s for node "old-k8s-version-096492" to be "Ready" ...
	I1029 09:10:00.087052  308587 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:10:00.087112  308587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:10:00.833881  308587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.282328228s)
	I1029 09:10:00.833973  308587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.259152578s)
	I1029 09:10:01.248829  308587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.552719585s)
	I1029 09:10:01.248878  308587 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.161745899s)
	I1029 09:10:01.248915  308587 api_server.go:72] duration metric: took 3.877845353s to wait for apiserver process to appear ...
	I1029 09:10:01.249067  308587 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:10:01.249101  308587 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:10:01.250955  308587 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-096492 addons enable metrics-server
	
	I1029 09:10:01.252280  308587 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1029 09:09:58.827708  302556 node_ready.go:57] node "default-k8s-diff-port-017274" has "Ready":"False" status (will retry)
	W1029 09:10:00.829571  302556 node_ready.go:57] node "default-k8s-diff-port-017274" has "Ready":"False" status (will retry)
	I1029 09:10:00.767152  310203 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:10:00.773321  310203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:10:00.773357  310203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:10:00.773371  310203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:10:00.773420  310203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:10:00.773550  310203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:10:00.773705  310203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:10:00.783153  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:00.805595  310203 start.go:296] duration metric: took 188.334044ms for postStartSetup
	I1029 09:10:00.805682  310203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:10:00.805726  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:00.832587  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:00.936848  310203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:10:00.941745  310203 fix.go:56] duration metric: took 5.04347576s for fixHost
	I1029 09:10:00.941778  310203 start.go:83] releasing machines lock for "embed-certs-834228", held for 5.04355168s
	I1029 09:10:00.941848  310203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834228
	I1029 09:10:00.961792  310203 ssh_runner.go:195] Run: cat /version.json
	I1029 09:10:00.961852  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:00.961853  310203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:10:00.962029  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:00.986825  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:00.987513  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:01.196837  310203 ssh_runner.go:195] Run: systemctl --version
	I1029 09:10:01.205383  310203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:10:01.248570  310203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:10:01.255117  310203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:10:01.255207  310203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:10:01.264896  310203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:10:01.264921  310203 start.go:496] detecting cgroup driver to use...
	I1029 09:10:01.264956  310203 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:10:01.265013  310203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:10:01.283889  310203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:10:01.299891  310203 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:10:01.299953  310203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:10:01.316667  310203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:10:01.331674  310203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:10:01.425793  310203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:10:01.519129  310203 docker.go:234] disabling docker service ...
	I1029 09:10:01.519190  310203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:10:01.534703  310203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:10:01.548451  310203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:10:01.633597  310203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:10:01.723354  310203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:10:01.739252  310203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:10:01.757834  310203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:10:01.757901  310203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:01.769579  310203 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:10:01.769657  310203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:01.779671  310203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:01.790633  310203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:01.801022  310203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:10:01.809580  310203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:01.819764  310203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:01.834694  310203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:01.845773  310203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:10:01.854334  310203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:10:01.862636  310203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:01.960570  310203 ssh_runner.go:195] Run: sudo systemctl restart crio
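After the sed pipeline and restart above, the edited keys in 02-crio.conf should read as follows; a hedged spot-check (keys and values taken from the commands in the log, surrounding file content may differ):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",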
	I1029 09:10:02.079179  310203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:10:02.079253  310203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:10:02.083860  310203 start.go:564] Will wait 60s for crictl version
	I1029 09:10:02.083928  310203 ssh_runner.go:195] Run: which crictl
	I1029 09:10:02.088393  310203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:10:02.119499  310203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
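The same query can be issued against the socket explicitly (endpoint taken from the /etc/crictl.yaml written above):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# RuntimeName: cri-o, RuntimeVersion: 1.34.1, RuntimeApiVersion: v1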
	I1029 09:10:02.119584  310203 ssh_runner.go:195] Run: crio --version
	I1029 09:10:02.157905  310203 ssh_runner.go:195] Run: crio --version
	I1029 09:10:02.197418  310203 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:10:01.159442  310655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:10:01.159571  310655 machine.go:97] duration metric: took 4.423220962s to provisionDockerMachine
	I1029 09:10:01.159588  310655 start.go:293] postStartSetup for "no-preload-043790" (driver="docker")
	I1029 09:10:01.159607  310655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:10:01.159685  310655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:10:01.159808  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:01.186171  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:01.297188  310655 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:10:01.301681  310655 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:10:01.301715  310655 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:10:01.301728  310655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:10:01.301787  310655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:10:01.301882  310655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:10:01.302057  310655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:10:01.310349  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:01.331394  310655 start.go:296] duration metric: took 171.784477ms for postStartSetup
	I1029 09:10:01.331478  310655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:10:01.331527  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:01.354200  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:01.464071  310655 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:10:01.468935  310655 fix.go:56] duration metric: took 5.095654895s for fixHost
	I1029 09:10:01.468978  310655 start.go:83] releasing machines lock for "no-preload-043790", held for 5.095717647s
	I1029 09:10:01.469120  310655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-043790
	I1029 09:10:01.487810  310655 ssh_runner.go:195] Run: cat /version.json
	I1029 09:10:01.487857  310655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:10:01.487872  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:01.487928  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:01.508796  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:01.509118  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:01.667051  310655 ssh_runner.go:195] Run: systemctl --version
	I1029 09:10:01.675181  310655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:10:01.716085  310655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:10:01.721511  310655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:10:01.721589  310655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:10:01.731775  310655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:10:01.731798  310655 start.go:496] detecting cgroup driver to use...
	I1029 09:10:01.731830  310655 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:10:01.731875  310655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:10:01.751096  310655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:10:01.767465  310655 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:10:01.767551  310655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:10:01.784970  310655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:10:01.800656  310655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:10:01.886356  310655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:10:01.994091  310655 docker.go:234] disabling docker service ...
	I1029 09:10:01.994160  310655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:10:02.010921  310655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:10:02.025105  310655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:10:02.120336  310655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:10:02.219111  310655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:10:02.232514  310655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:10:02.248502  310655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:10:02.248552  310655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:02.259070  310655 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:10:02.259139  310655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:02.269707  310655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:02.279250  310655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:02.290826  310655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:10:02.301505  310655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:02.311635  310655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:02.321524  310655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:02.332263  310655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:10:02.340550  310655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:10:02.349835  310655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:02.435584  310655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:10:02.559113  310655 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:10:02.559186  310655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:10:02.564246  310655 start.go:564] Will wait 60s for crictl version
	I1029 09:10:02.564315  310655 ssh_runner.go:195] Run: which crictl
	I1029 09:10:02.568876  310655 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:10:02.598503  310655 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:10:02.598599  310655 ssh_runner.go:195] Run: crio --version
	I1029 09:10:02.639065  310655 ssh_runner.go:195] Run: crio --version
	I1029 09:10:02.672227  310655 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:10:02.198684  310203 cli_runner.go:164] Run: docker network inspect embed-certs-834228 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:10:02.218477  310203 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1029 09:10:02.223139  310203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
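The one-liner above swaps a single /etc/hosts entry by rewriting the file through a temp file; generalized, with NAME and ADDR as illustrative placeholders:

	NAME=host.minikube.internal ADDR=192.168.76.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts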
	I1029 09:10:02.234312  310203 kubeadm.go:884] updating cluster {Name:embed-certs-834228 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:10:02.234424  310203 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:02.234493  310203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:10:02.268823  310203 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:10:02.268852  310203 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:10:02.268909  310203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:10:02.297694  310203 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:10:02.297725  310203 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:10:02.297735  310203 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1029 09:10:02.297855  310203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-834228 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:10:02.297934  310203 ssh_runner.go:195] Run: crio config
	I1029 09:10:02.346882  310203 cni.go:84] Creating CNI manager for ""
	I1029 09:10:02.346907  310203 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:02.346925  310203 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:10:02.346952  310203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-834228 NodeName:embed-certs-834228 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:10:02.347127  310203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-834228"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:10:02.347196  310203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:10:02.355872  310203 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:10:02.355936  310203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:10:02.364271  310203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1029 09:10:02.380852  310203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:10:02.396716  310203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
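Once the rendered config is at that path, it can be sanity-checked before kubeadm consumes it; a sketch, assuming kubeadm >= 1.26 (which ships `kubeadm config validate`):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new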
	I1029 09:10:02.410379  310203 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:10:02.414554  310203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:10:02.425229  310203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:02.510960  310203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:10:02.538016  310203 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228 for IP: 192.168.76.2
	I1029 09:10:02.538038  310203 certs.go:195] generating shared ca certs ...
	I1029 09:10:02.538058  310203 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:02.538206  310203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:10:02.538258  310203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:10:02.538280  310203 certs.go:257] generating profile certs ...
	I1029 09:10:02.538376  310203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/client.key
	I1029 09:10:02.538454  310203 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/apiserver.key.08bfc10c
	I1029 09:10:02.538531  310203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/proxy-client.key
	I1029 09:10:02.538674  310203 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:10:02.538720  310203 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:10:02.538734  310203 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:10:02.538775  310203 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:10:02.538810  310203 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:10:02.538847  310203 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:10:02.538905  310203 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:02.539691  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:10:02.562817  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:10:02.583334  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:10:02.607115  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:10:02.634564  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1029 09:10:02.656159  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:10:02.676058  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:10:02.696746  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/embed-certs-834228/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:10:02.715880  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:10:02.735947  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:10:02.758169  310203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:10:02.778093  310203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:10:02.792753  310203 ssh_runner.go:195] Run: openssl version
	I1029 09:10:02.800455  310203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:10:02.810380  310203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:10:02.814903  310203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:10:02.814959  310203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:10:02.853049  310203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:10:02.862251  310203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:10:02.871308  310203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:02.875271  310203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:02.875319  310203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:02.920516  310203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:10:02.929717  310203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:10:02.939501  310203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:10:02.943962  310203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:10:02.944055  310203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:10:02.979598  310203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
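The 51391683.0 link name above follows OpenSSL's hashed-directory convention: the file name is the certificate's subject hash plus a ".0" suffix:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem   # prints 51391683
	# hence the /etc/ssl/certs/51391683.0 symlink created above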
	I1029 09:10:02.988795  310203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:10:02.993181  310203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:10:03.040228  310203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:10:03.099542  310203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:10:03.147592  310203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:10:03.203142  310203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:10:03.256729  310203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
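Each -checkend run above exits non-zero if the certificate expires within the given window (86400 s = 24 h); standalone:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt && echo "valid for >=24h" || echo "expiring soon"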
	I1029 09:10:03.306750  310203 kubeadm.go:401] StartCluster: {Name:embed-certs-834228 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:03.306872  310203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:10:03.306925  310203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:10:03.342951  310203 cri.go:89] found id: "66aa912baa9af98f94ecb5b252508da4dacdaa895aab155c9bbd90f2b07a6d67"
	I1029 09:10:03.342974  310203 cri.go:89] found id: "b1e012893324df4a540415d2a2a886bc9306d87f1be54870a37e70562f009661"
	I1029 09:10:03.342980  310203 cri.go:89] found id: "0d384ad349a4f9e1f45c716a0c367d307afd1e671eccf883335f5764690e871a"
	I1029 09:10:03.342984  310203 cri.go:89] found id: "f516353885ecbd2eb5072fd9daac8f0cc0f088a1992d0c02fe4ca4ec5d2f5393"
	I1029 09:10:03.342999  310203 cri.go:89] found id: ""
	I1029 09:10:03.343053  310203 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:10:03.356229  310203 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:03Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:03.356298  310203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:10:03.365379  310203 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:10:03.365400  310203 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:10:03.365452  310203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:10:03.373374  310203 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:10:03.374234  310203 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-834228" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:03.374772  310203 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-3727/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-834228" cluster setting kubeconfig missing "embed-certs-834228" context setting]
	I1029 09:10:03.375414  310203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:03.376859  310203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:10:03.385772  310203 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1029 09:10:03.385807  310203 kubeadm.go:602] duration metric: took 20.399904ms to restartPrimaryControlPlane
	I1029 09:10:03.385820  310203 kubeadm.go:403] duration metric: took 79.075642ms to StartCluster
	I1029 09:10:03.385837  310203 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:03.385908  310203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:03.387928  310203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:03.388238  310203 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:03.388406  310203 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:10:03.388528  310203 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-834228"
	I1029 09:10:03.388551  310203 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-834228"
	W1029 09:10:03.388564  310203 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:10:03.388576  310203 addons.go:70] Setting dashboard=true in profile "embed-certs-834228"
	I1029 09:10:03.388592  310203 host.go:66] Checking if "embed-certs-834228" exists ...
	I1029 09:10:03.388594  310203 addons.go:239] Setting addon dashboard=true in "embed-certs-834228"
	W1029 09:10:03.388606  310203 addons.go:248] addon dashboard should already be in state true
	I1029 09:10:03.388638  310203 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:03.388661  310203 host.go:66] Checking if "embed-certs-834228" exists ...
	I1029 09:10:03.388690  310203 addons.go:70] Setting default-storageclass=true in profile "embed-certs-834228"
	I1029 09:10:03.388704  310203 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-834228"
	I1029 09:10:03.389028  310203 cli_runner.go:164] Run: docker container inspect embed-certs-834228 --format={{.State.Status}}
	I1029 09:10:03.389105  310203 cli_runner.go:164] Run: docker container inspect embed-certs-834228 --format={{.State.Status}}
	I1029 09:10:03.389295  310203 cli_runner.go:164] Run: docker container inspect embed-certs-834228 --format={{.State.Status}}
	I1029 09:10:03.391600  310203 out.go:179] * Verifying Kubernetes components...
	I1029 09:10:03.392846  310203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:03.417197  310203 addons.go:239] Setting addon default-storageclass=true in "embed-certs-834228"
	W1029 09:10:03.417266  310203 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:10:03.417296  310203 host.go:66] Checking if "embed-certs-834228" exists ...
	I1029 09:10:03.417917  310203 cli_runner.go:164] Run: docker container inspect embed-certs-834228 --format={{.State.Status}}
	I1029 09:10:03.420328  310203 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:10:03.421357  310203 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:10:03.421446  310203 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:03.421465  310203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:10:03.421521  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:03.423708  310203 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:10:02.673573  310655 cli_runner.go:164] Run: docker network inspect no-preload-043790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:10:02.694520  310655 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1029 09:10:02.699226  310655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:10:02.710072  310655 kubeadm.go:884] updating cluster {Name:no-preload-043790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-043790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:10:02.710216  310655 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:02.710263  310655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:10:02.743164  310655 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:10:02.743191  310655 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:10:02.743201  310655 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1029 09:10:02.743322  310655 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-043790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-043790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
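
The unit drop-in above is regenerated on every start: minikube templates the kubelet ExecStart line from the node's profile (binary version, hostname override, node IP) before writing it out over SSH. A minimal Go sketch of that flag assembly, using illustrative names rather than minikube's actual template code:

// Sketch only: assembles an ExecStart line like the one in the log.
// The nodeOpts type and kubeletExecStart helper are hypothetical.
package main

import (
	"fmt"
	"strings"
)

type nodeOpts struct {
	version, hostname, nodeIP string
}

func kubeletExecStart(n nodeOpts) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + n.hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + n.nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
		n.version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart(nodeOpts{"v1.34.1", "no-preload-043790", "192.168.94.2"}))
}
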
	I1029 09:10:02.743403  310655 ssh_runner.go:195] Run: crio config
	I1029 09:10:02.794137  310655 cni.go:84] Creating CNI manager for ""
	I1029 09:10:02.794166  310655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:02.794187  310655 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:10:02.794216  310655 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-043790 NodeName:no-preload-043790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:10:02.794385  310655 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-043790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:10:02.794461  310655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:10:02.804329  310655 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:10:02.804383  310655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:10:02.812660  310655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:10:02.826537  310655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:10:02.840626  310655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
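
The 2213-byte file just written is the kubeadm config rendered above. As a sketch (not minikube's code), the ClusterConfiguration document in it can be round-tripped with gopkg.in/yaml.v3 to confirm the fields seen in the log; the struct here is a trimmed, illustrative subset:

// Sketch: parse a ClusterConfiguration document like the one in the log.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const clusterCfg = `
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
kubernetesVersion: v1.34.1
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  podSubnet: "10.244.0.0/16"
`

type clusterConfiguration struct {
	Kind                 string `yaml:"kind"`
	ClusterName          string `yaml:"clusterName"`
	KubernetesVersion    string `yaml:"kubernetesVersion"`
	ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
	Networking           struct {
		PodSubnet string `yaml:"podSubnet"`
	} `yaml:"networking"`
}

func main() {
	var c clusterConfiguration
	if err := yaml.Unmarshal([]byte(clusterCfg), &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Kind, c.KubernetesVersion, c.Networking.PodSubnet)
}
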
	I1029 09:10:02.855102  310655 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:10:02.859176  310655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:10:02.870219  310655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:02.963092  310655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:10:02.986663  310655 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790 for IP: 192.168.94.2
	I1029 09:10:02.986692  310655 certs.go:195] generating shared ca certs ...
	I1029 09:10:02.986707  310655 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:02.986866  310655 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:10:02.986929  310655 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:10:02.986943  310655 certs.go:257] generating profile certs ...
	I1029 09:10:02.987082  310655 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/client.key
	I1029 09:10:02.987142  310655 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/apiserver.key.554ea174
	I1029 09:10:02.987175  310655 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/proxy-client.key
	I1029 09:10:02.987301  310655 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:10:02.987332  310655 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:10:02.987341  310655 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:10:02.987368  310655 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:10:02.987401  310655 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:10:02.987436  310655 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:10:02.987475  310655 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:02.988075  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:10:03.010614  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:10:03.033671  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:10:03.056654  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:10:03.085647  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:10:03.112102  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:10:03.134732  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:10:03.159731  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/no-preload-043790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:10:03.184880  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:10:03.212923  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:10:03.244566  310655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:10:03.269126  310655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:10:03.284973  310655 ssh_runner.go:195] Run: openssl version
	I1029 09:10:03.293437  310655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:10:03.305172  310655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:10:03.310207  310655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:10:03.310260  310655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:10:03.353603  310655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:10:03.363201  310655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:10:03.373392  310655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:03.377609  310655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:03.377672  310655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:03.437599  310655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:10:03.456040  310655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:10:03.468132  310655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:10:03.474928  310655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:10:03.475066  310655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:10:03.534684  310655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:10:03.548765  310655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:10:03.554402  310655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:10:03.622881  310655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:10:03.701262  310655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:10:03.774112  310655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:10:03.837967  310655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:10:03.909234  310655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
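
Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now (exit 0 means yes). An equivalent check in Go with crypto/x509, using an illustrative path and helper name:

// Sketch of the -checkend test: nil means the cert at path is valid
// for at least `window` more time.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires within %s (NotAfter=%s)", path, window, cert.NotAfter)
	}
	return nil
}

func main() {
	fmt.Println(checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}
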
	I1029 09:10:03.978504  310655 kubeadm.go:401] StartCluster: {Name:no-preload-043790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-043790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:03.978606  310655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:10:03.978677  310655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:10:04.028929  310655 cri.go:89] found id: "90320debdab793d8acd7009f9643c60d73f0cb96a8b824f6fde5cdeab7a2d1c0"
	I1029 09:10:04.028955  310655 cri.go:89] found id: "19319884348b2e0458cd97dc51733e41464962a2500ec16c77771a98ba4e8b27"
	I1029 09:10:04.028961  310655 cri.go:89] found id: "aef6cdacaff629417a19cf93c9fdd05bdebca3a660634d42e64d3d9b50f6be3b"
	I1029 09:10:04.028966  310655 cri.go:89] found id: "a8e01bc837509e1a7e1a5c19a35ea64e574acd55a0d06c30f68441a4dc29ff7c"
	I1029 09:10:04.028970  310655 cri.go:89] found id: ""
	I1029 09:10:04.029043  310655 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:10:04.049787  310655 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:04Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:04.049913  310655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:10:04.066732  310655 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:10:04.066752  310655 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:10:04.066812  310655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:10:04.082233  310655 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:10:04.083759  310655 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-043790" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:04.084912  310655 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-3727/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-043790" cluster setting kubeconfig missing "no-preload-043790" context setting]
	I1029 09:10:04.086560  310655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:04.088952  310655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:10:04.101757  310655 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1029 09:10:04.101869  310655 kubeadm.go:602] duration metric: took 35.110056ms to restartPrimaryControlPlane
	I1029 09:10:04.101879  310655 kubeadm.go:403] duration metric: took 123.382721ms to StartCluster
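
The `sudo diff -u` of kubeadm.yaml against kubeadm.yaml.new is what lets the log conclude that the running cluster does not require reconfiguration: diff exiting 0 means the freshly rendered config matches the live one. A sketch of that decision, assuming diff's conventional exit codes (0 = same, 1 = changed):

// Sketch only: the needsReconfig helper is hypothetical, not minikube's API.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical configs: skip control-plane restart
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // diff found changes
	}
	return false, err // diff itself failed to run
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}
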
	I1029 09:10:04.101898  310655 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:04.101972  310655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:04.104663  310655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:04.104933  310655 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:04.105286  310655 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:04.105221  310655 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:10:04.105345  310655 addons.go:70] Setting storage-provisioner=true in profile "no-preload-043790"
	I1029 09:10:04.105366  310655 addons.go:239] Setting addon storage-provisioner=true in "no-preload-043790"
	W1029 09:10:04.105374  310655 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:10:04.105389  310655 addons.go:70] Setting dashboard=true in profile "no-preload-043790"
	I1029 09:10:04.105423  310655 addons.go:239] Setting addon dashboard=true in "no-preload-043790"
	W1029 09:10:04.105471  310655 addons.go:248] addon dashboard should already be in state true
	I1029 09:10:04.105551  310655 host.go:66] Checking if "no-preload-043790" exists ...
	I1029 09:10:04.105399  310655 host.go:66] Checking if "no-preload-043790" exists ...
	I1029 09:10:04.105394  310655 addons.go:70] Setting default-storageclass=true in profile "no-preload-043790"
	I1029 09:10:04.105792  310655 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-043790"
	I1029 09:10:04.106131  310655 cli_runner.go:164] Run: docker container inspect no-preload-043790 --format={{.State.Status}}
	I1029 09:10:04.106132  310655 cli_runner.go:164] Run: docker container inspect no-preload-043790 --format={{.State.Status}}
	I1029 09:10:04.106230  310655 cli_runner.go:164] Run: docker container inspect no-preload-043790 --format={{.State.Status}}
	I1029 09:10:04.116225  310655 out.go:179] * Verifying Kubernetes components...
	I1029 09:10:04.119078  310655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:04.146793  310655 addons.go:239] Setting addon default-storageclass=true in "no-preload-043790"
	W1029 09:10:04.146817  310655 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:10:04.146845  310655 host.go:66] Checking if "no-preload-043790" exists ...
	I1029 09:10:04.149246  310655 cli_runner.go:164] Run: docker container inspect no-preload-043790 --format={{.State.Status}}
	I1029 09:10:04.150392  310655 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:10:04.151620  310655 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:04.151640  310655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:10:04.151693  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:04.159819  310655 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:10:04.161295  310655 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:10:01.253429  308587 addons.go:515] duration metric: took 3.882322646s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1029 09:10:01.254751  308587 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1029 09:10:01.254789  308587 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1029 09:10:01.750151  308587 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:10:01.755117  308587 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:10:01.756703  308587 api_server.go:141] control plane version: v1.28.0
	I1029 09:10:01.756738  308587 api_server.go:131] duration metric: took 507.655423ms to wait for apiserver health ...
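
The healthz sequence above is a plain poll: GET /healthz until the apiserver stops returning 500 (here, once poststarthook/rbac/bootstrap-roles completes) and answers 200 "ok". A self-contained sketch of such a loop; the cadence and timeout are illustrative, and TLS verification is skipped because the apiserver presents minikube's own CA:

// Sketch: poll an apiserver healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.85.2:8443/healthz", 2*time.Minute))
}
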
	I1029 09:10:01.756749  308587 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:10:01.761319  308587 system_pods.go:59] 8 kube-system pods found
	I1029 09:10:01.761359  308587 system_pods.go:61] "coredns-5dd5756b68-v5mr5" [c73ffe63-3e51-47e1-a466-110f80cedb9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:01.761370  308587 system_pods.go:61] "etcd-old-k8s-version-096492" [0ad9fe51-6f63-4a35-865d-3e464da0e8c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:01.761378  308587 system_pods.go:61] "kindnet-7qztm" [6d656d18-bd80-4efa-b002-2e13a052ff06] Running
	I1029 09:10:01.761388  308587 system_pods.go:61] "kube-apiserver-old-k8s-version-096492" [29229f07-69be-442b-bbb8-65374e2538e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:01.761400  308587 system_pods.go:61] "kube-controller-manager-old-k8s-version-096492" [6ababf36-84a7-4170-ab15-f56ba7f5d171] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:01.761409  308587 system_pods.go:61] "kube-proxy-8kpqf" [34799f5c-3bdd-4fa6-be66-a77a7ebe00f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:10:01.761417  308587 system_pods.go:61] "kube-scheduler-old-k8s-version-096492" [0d43a07e-55e0-41a8-9ddf-a3e9920485f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:01.761425  308587 system_pods.go:61] "storage-provisioner" [8e81d736-a277-4ca4-b50e-d930d86ab51e] Running
	I1029 09:10:01.761432  308587 system_pods.go:74] duration metric: took 4.67694ms to wait for pod list to return data ...
	I1029 09:10:01.761445  308587 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:10:01.763862  308587 default_sa.go:45] found service account: "default"
	I1029 09:10:01.763887  308587 default_sa.go:55] duration metric: took 2.435793ms for default service account to be created ...
	I1029 09:10:01.763896  308587 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:10:01.767537  308587 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:01.767567  308587 system_pods.go:89] "coredns-5dd5756b68-v5mr5" [c73ffe63-3e51-47e1-a466-110f80cedb9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:01.767584  308587 system_pods.go:89] "etcd-old-k8s-version-096492" [0ad9fe51-6f63-4a35-865d-3e464da0e8c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:01.767595  308587 system_pods.go:89] "kindnet-7qztm" [6d656d18-bd80-4efa-b002-2e13a052ff06] Running
	I1029 09:10:01.767605  308587 system_pods.go:89] "kube-apiserver-old-k8s-version-096492" [29229f07-69be-442b-bbb8-65374e2538e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:01.767617  308587 system_pods.go:89] "kube-controller-manager-old-k8s-version-096492" [6ababf36-84a7-4170-ab15-f56ba7f5d171] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:01.767627  308587 system_pods.go:89] "kube-proxy-8kpqf" [34799f5c-3bdd-4fa6-be66-a77a7ebe00f8] Running
	I1029 09:10:01.767635  308587 system_pods.go:89] "kube-scheduler-old-k8s-version-096492" [0d43a07e-55e0-41a8-9ddf-a3e9920485f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:01.767643  308587 system_pods.go:89] "storage-provisioner" [8e81d736-a277-4ca4-b50e-d930d86ab51e] Running
	I1029 09:10:01.767653  308587 system_pods.go:126] duration metric: took 3.751406ms to wait for k8s-apps to be running ...
	I1029 09:10:01.767666  308587 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:10:01.767708  308587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:01.781403  308587 system_svc.go:56] duration metric: took 13.728271ms WaitForService to wait for kubelet
	I1029 09:10:01.781429  308587 kubeadm.go:587] duration metric: took 4.410360653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:01.781445  308587 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:10:01.784772  308587 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:10:01.784799  308587 node_conditions.go:123] node cpu capacity is 8
	I1029 09:10:01.784815  308587 node_conditions.go:105] duration metric: took 3.365173ms to run NodePressure ...
	I1029 09:10:01.784835  308587 start.go:242] waiting for startup goroutines ...
	I1029 09:10:01.784849  308587 start.go:247] waiting for cluster config update ...
	I1029 09:10:01.784864  308587 start.go:256] writing updated cluster config ...
	I1029 09:10:01.785184  308587 ssh_runner.go:195] Run: rm -f paused
	I1029 09:10:01.789189  308587 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:01.793937  308587 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-v5mr5" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:10:03.800303  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	I1029 09:10:03.425077  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:10:03.425639  310203 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:10:03.425707  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:03.461277  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:03.462028  310203 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:03.462050  310203 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:10:03.462113  310203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:03.464303  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:03.489387  310203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:03.570825  310203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:10:03.596286  310203 node_ready.go:35] waiting up to 6m0s for node "embed-certs-834228" to be "Ready" ...
	I1029 09:10:03.605645  310203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:03.622628  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:10:03.622663  310203 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:10:03.638655  310203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:03.675691  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:10:03.675717  310203 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:10:03.736696  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:10:03.736726  310203 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:10:03.766411  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:10:03.766440  310203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:10:03.801217  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:10:03.801243  310203 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:10:03.826825  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:10:03.826852  310203 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:10:03.867187  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:10:03.867213  310203 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:10:03.890103  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:10:03.890140  310203 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:10:03.915257  310203 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:10:03.915281  310203 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:10:03.947954  310203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:10:05.632642  310203 node_ready.go:49] node "embed-certs-834228" is "Ready"
	I1029 09:10:05.632682  310203 node_ready.go:38] duration metric: took 2.036350379s for node "embed-certs-834228" to be "Ready" ...
	I1029 09:10:05.632704  310203 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:10:05.632761  310203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:10:04.162319  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:10:04.162348  310655 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:10:04.162417  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:04.189213  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:04.191703  310655 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:04.191729  310655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:10:04.191794  310655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:04.219121  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:04.231782  310655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:04.331843  310655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:10:04.337622  310655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:04.357534  310655 node_ready.go:35] waiting up to 6m0s for node "no-preload-043790" to be "Ready" ...
	I1029 09:10:04.393124  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:10:04.393165  310655 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:10:04.424304  310655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:04.433724  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:10:04.433749  310655 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:10:04.465940  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:10:04.465962  310655 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:10:04.488191  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:10:04.488211  310655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:10:04.535859  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:10:04.535887  310655 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:10:04.556565  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:10:04.556658  310655 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:10:04.577868  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:10:04.577896  310655 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:10:04.596746  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:10:04.596772  310655 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:10:04.615705  310655 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:10:04.615730  310655 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:10:04.634038  310655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
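
Both profiles install the dashboard the same way: ten manifests staged under /etc/kubernetes/addons and applied with a single kubectl invocation like the one above. A sketch of how that command line can be assembled; the file list is abbreviated here and the helper name is hypothetical:

// Sketch: build one `kubectl apply` over a set of staged manifests.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func applyCmd(kubectl string, files []string) *exec.Cmd {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	return exec.Command(kubectl, args...)
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...the remaining dashboard-*.yaml manifests from the log
	}
	cmd := applyCmd("/var/lib/minikube/binaries/v1.34.1/kubectl", files)
	fmt.Println(strings.Join(cmd.Args, " "))
}
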
	I1029 09:10:06.348525  310203 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.742836254s)
	I1029 09:10:06.348602  310203 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.709914825s)
	I1029 09:10:06.349074  310203 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.401000942s)
	I1029 09:10:06.349112  310203 api_server.go:72] duration metric: took 2.960840085s to wait for apiserver process to appear ...
	I1029 09:10:06.349123  310203 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:10:06.349142  310203 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:10:06.350959  310203 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-834228 addons enable metrics-server
	
	I1029 09:10:06.356953  310203 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:06.356982  310203 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:10:06.363965  310203 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1029 09:10:03.328508  302556 node_ready.go:57] node "default-k8s-diff-port-017274" has "Ready":"False" status (will retry)
	I1029 09:10:03.829048  302556 node_ready.go:49] node "default-k8s-diff-port-017274" is "Ready"
	I1029 09:10:03.829080  302556 node_ready.go:38] duration metric: took 11.504096752s for node "default-k8s-diff-port-017274" to be "Ready" ...
	I1029 09:10:03.829096  302556 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:10:03.829159  302556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:10:03.845047  302556 api_server.go:72] duration metric: took 11.786548891s to wait for apiserver process to appear ...
	I1029 09:10:03.845119  302556 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:10:03.845149  302556 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1029 09:10:03.852711  302556 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1029 09:10:03.854197  302556 api_server.go:141] control plane version: v1.34.1
	I1029 09:10:03.854240  302556 api_server.go:131] duration metric: took 9.105141ms to wait for apiserver health ...
	I1029 09:10:03.854252  302556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:10:03.859202  302556 system_pods.go:59] 8 kube-system pods found
	I1029 09:10:03.859243  302556 system_pods.go:61] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:03.859251  302556 system_pods.go:61] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running
	I1029 09:10:03.859258  302556 system_pods.go:61] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running
	I1029 09:10:03.859265  302556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running
	I1029 09:10:03.859271  302556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running
	I1029 09:10:03.859276  302556 system_pods.go:61] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running
	I1029 09:10:03.859281  302556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running
	I1029 09:10:03.859292  302556 system_pods.go:61] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:03.859300  302556 system_pods.go:74] duration metric: took 5.034901ms to wait for pod list to return data ...
	I1029 09:10:03.859310  302556 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:10:03.864120  302556 default_sa.go:45] found service account: "default"
	I1029 09:10:03.864183  302556 default_sa.go:55] duration metric: took 4.86596ms for default service account to be created ...
	I1029 09:10:03.864206  302556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:10:03.869178  302556 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:03.869623  302556 system_pods.go:89] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:03.869650  302556 system_pods.go:89] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running
	I1029 09:10:03.869707  302556 system_pods.go:89] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running
	I1029 09:10:03.869731  302556 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running
	I1029 09:10:03.869751  302556 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running
	I1029 09:10:03.869782  302556 system_pods.go:89] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running
	I1029 09:10:03.869807  302556 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running
	I1029 09:10:03.869827  302556 system_pods.go:89] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:03.869903  302556 retry.go:31] will retry after 252.988362ms: missing components: kube-dns
	I1029 09:10:04.138854  302556 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:04.138896  302556 system_pods.go:89] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:04.138910  302556 system_pods.go:89] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running
	I1029 09:10:04.138918  302556 system_pods.go:89] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running
	I1029 09:10:04.138924  302556 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running
	I1029 09:10:04.138930  302556 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running
	I1029 09:10:04.138935  302556 system_pods.go:89] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running
	I1029 09:10:04.138940  302556 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running
	I1029 09:10:04.138947  302556 system_pods.go:89] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:04.138966  302556 retry.go:31] will retry after 298.087024ms: missing components: kube-dns
	I1029 09:10:04.442128  302556 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:04.442177  302556 system_pods.go:89] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:04.442188  302556 system_pods.go:89] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running
	I1029 09:10:04.442197  302556 system_pods.go:89] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running
	I1029 09:10:04.442202  302556 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running
	I1029 09:10:04.442207  302556 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running
	I1029 09:10:04.442212  302556 system_pods.go:89] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running
	I1029 09:10:04.442217  302556 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running
	I1029 09:10:04.442224  302556 system_pods.go:89] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:04.442249  302556 retry.go:31] will retry after 351.815158ms: missing components: kube-dns
	I1029 09:10:04.806484  302556 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:04.806540  302556 system_pods.go:89] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Running
	I1029 09:10:04.806549  302556 system_pods.go:89] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running
	I1029 09:10:04.806555  302556 system_pods.go:89] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running
	I1029 09:10:04.806567  302556 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running
	I1029 09:10:04.806573  302556 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running
	I1029 09:10:04.806579  302556 system_pods.go:89] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running
	I1029 09:10:04.806585  302556 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running
	I1029 09:10:04.806594  302556 system_pods.go:89] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Running
	I1029 09:10:04.806603  302556 system_pods.go:126] duration metric: took 942.37979ms to wait for k8s-apps to be running ...
	I1029 09:10:04.806612  302556 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:10:04.806663  302556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:04.827030  302556 system_svc.go:56] duration metric: took 20.404988ms WaitForService to wait for kubelet
	I1029 09:10:04.827121  302556 kubeadm.go:587] duration metric: took 12.768626996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:04.827161  302556 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:10:04.833165  302556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:10:04.833263  302556 node_conditions.go:123] node cpu capacity is 8
	I1029 09:10:04.833291  302556 node_conditions.go:105] duration metric: took 6.09407ms to run NodePressure ...
	I1029 09:10:04.833335  302556 start.go:242] waiting for startup goroutines ...
	I1029 09:10:04.833371  302556 start.go:247] waiting for cluster config update ...
	I1029 09:10:04.833411  302556 start.go:256] writing updated cluster config ...
	I1029 09:10:04.833813  302556 ssh_runner.go:195] Run: rm -f paused
	I1029 09:10:04.838865  302556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:04.844910  302556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qtsxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:04.851778  302556 pod_ready.go:94] pod "coredns-66bc5c9577-qtsxl" is "Ready"
	I1029 09:10:04.851831  302556 pod_ready.go:86] duration metric: took 6.897262ms for pod "coredns-66bc5c9577-qtsxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:04.854670  302556 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:04.860251  302556 pod_ready.go:94] pod "etcd-default-k8s-diff-port-017274" is "Ready"
	I1029 09:10:04.860330  302556 pod_ready.go:86] duration metric: took 5.638147ms for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:04.863279  302556 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:04.869276  302556 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-017274" is "Ready"
	I1029 09:10:04.869299  302556 pod_ready.go:86] duration metric: took 5.953868ms for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:04.871723  302556 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:05.244655  302556 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-017274" is "Ready"
	I1029 09:10:05.244689  302556 pod_ready.go:86] duration metric: took 372.939538ms for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:05.445150  302556 pod_ready.go:83] waiting for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:05.843941  302556 pod_ready.go:94] pod "kube-proxy-82xcl" is "Ready"
	I1029 09:10:05.843974  302556 pod_ready.go:86] duration metric: took 398.790462ms for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:06.048276  302556 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:06.443966  302556 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-017274" is "Ready"
	I1029 09:10:06.444003  302556 pod_ready.go:86] duration metric: took 395.698282ms for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:06.444018  302556 pod_ready.go:40] duration metric: took 1.605070257s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:06.499944  302556 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:10:06.503202  302556 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-017274" cluster and "default" namespace by default
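	The readiness loop above polls the kube-system pod list with a short backoff (253ms, 298ms, 352ms) until the one missing component, kube-dns, reports Running, then re-checks each control-plane pod individually with a 4m0s budget. The same check can be reproduced against this profile; a minimal sketch, assuming the kubeconfig context minikube just wrote:

		kubectl --context default-k8s-diff-port-017274 get pods -n kube-system -o wide
		kubectl --context default-k8s-diff-port-017274 wait pod -n kube-system \
		  -l k8s-app=kube-dns --for=condition=Ready --timeout=240s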
	I1029 09:10:06.210290  310655 node_ready.go:49] node "no-preload-043790" is "Ready"
	I1029 09:10:06.210328  310655 node_ready.go:38] duration metric: took 1.852758447s for node "no-preload-043790" to be "Ready" ...
	I1029 09:10:06.210345  310655 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:10:06.210402  310655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:10:06.851414  310655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.427077938s)
	I1029 09:10:06.851526  310655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.217449546s)
	I1029 09:10:06.851414  310655 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.513757439s)
	I1029 09:10:06.851606  310655 api_server.go:72] duration metric: took 2.746647627s to wait for apiserver process to appear ...
	I1029 09:10:06.851647  310655 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:10:06.851664  310655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1029 09:10:06.853583  310655 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-043790 addons enable metrics-server
	
	I1029 09:10:06.856495  310655 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:06.856520  310655 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
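	The two blocks above are the same verbose /healthz payload, logged once at info level and once as the warning that schedules a retry. A 500 here is expected briefly at startup: the rbac and scheduling poststarthooks report failed until the bootstrap roles and system priority classes are confirmed, and the reasons are withheld from unauthorized callers. The probe is plain HTTPS and can be reproduced by hand; a minimal sketch, assuming anonymous access to /healthz (the Kubernetes default via the system:public-info-viewer role):

		curl -sk 'https://192.168.94.2:8443/healthz?verbose=true'

	On the next poll below, the same endpoint returns 200 with body "ok".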
	I1029 09:10:06.859301  310655 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1029 09:10:05.803606  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	W1029 09:10:08.300275  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	I1029 09:10:06.365098  310203 addons.go:515] duration metric: took 2.976700464s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1029 09:10:06.849886  310203 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1029 09:10:06.854575  310203 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1029 09:10:06.855758  310203 api_server.go:141] control plane version: v1.34.1
	I1029 09:10:06.855783  310203 api_server.go:131] duration metric: took 506.653293ms to wait for apiserver health ...
	I1029 09:10:06.855795  310203 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:10:06.859238  310203 system_pods.go:59] 8 kube-system pods found
	I1029 09:10:06.859272  310203 system_pods.go:61] "coredns-66bc5c9577-w9vf6" [a9ebd931-6ce6-4d23-b24c-ee0e6037096b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:06.859284  310203 system_pods.go:61] "etcd-embed-certs-834228" [391b9720-f295-4145-9fcc-8cc0ac44b0f7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:06.859292  310203 system_pods.go:61] "kindnet-dgkfz" [6616e889-1d54-48f4-9239-12fdc19fd542] Running
	I1029 09:10:06.859301  310203 system_pods.go:61] "kube-apiserver-embed-certs-834228" [d440e37e-cc77-433d-a832-3e131e6c328f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:06.859312  310203 system_pods.go:61] "kube-controller-manager-embed-certs-834228" [259bab7f-6ff4-4328-bf40-11f84df53260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:06.859319  310203 system_pods.go:61] "kube-proxy-bxthb" [9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f] Running
	I1029 09:10:06.859328  310203 system_pods.go:61] "kube-scheduler-embed-certs-834228" [a4347af3-6219-4e22-84ba-708b051389e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:06.859332  310203 system_pods.go:61] "storage-provisioner" [cbc8bcae-4373-412e-a597-5e2af9bbabea] Running
	I1029 09:10:06.859339  310203 system_pods.go:74] duration metric: took 3.538284ms to wait for pod list to return data ...
	I1029 09:10:06.859346  310203 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:10:06.861951  310203 default_sa.go:45] found service account: "default"
	I1029 09:10:06.861972  310203 default_sa.go:55] duration metric: took 2.620075ms for default service account to be created ...
	I1029 09:10:06.861983  310203 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:10:06.864597  310203 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:06.864620  310203 system_pods.go:89] "coredns-66bc5c9577-w9vf6" [a9ebd931-6ce6-4d23-b24c-ee0e6037096b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:06.864635  310203 system_pods.go:89] "etcd-embed-certs-834228" [391b9720-f295-4145-9fcc-8cc0ac44b0f7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:06.864641  310203 system_pods.go:89] "kindnet-dgkfz" [6616e889-1d54-48f4-9239-12fdc19fd542] Running
	I1029 09:10:06.864647  310203 system_pods.go:89] "kube-apiserver-embed-certs-834228" [d440e37e-cc77-433d-a832-3e131e6c328f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:06.864653  310203 system_pods.go:89] "kube-controller-manager-embed-certs-834228" [259bab7f-6ff4-4328-bf40-11f84df53260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:06.864657  310203 system_pods.go:89] "kube-proxy-bxthb" [9e97c02a-d3b4-4b2a-9ac5-ea6cc572848f] Running
	I1029 09:10:06.864662  310203 system_pods.go:89] "kube-scheduler-embed-certs-834228" [a4347af3-6219-4e22-84ba-708b051389e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:06.864666  310203 system_pods.go:89] "storage-provisioner" [cbc8bcae-4373-412e-a597-5e2af9bbabea] Running
	I1029 09:10:06.864673  310203 system_pods.go:126] duration metric: took 2.685122ms to wait for k8s-apps to be running ...
	I1029 09:10:06.864679  310203 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:10:06.864720  310203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:06.878389  310203 system_svc.go:56] duration metric: took 13.698619ms WaitForService to wait for kubelet
	I1029 09:10:06.878422  310203 kubeadm.go:587] duration metric: took 3.490150509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:06.878444  310203 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:10:06.881969  310203 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:10:06.882022  310203 node_conditions.go:123] node cpu capacity is 8
	I1029 09:10:06.882039  310203 node_conditions.go:105] duration metric: took 3.589084ms to run NodePressure ...
	I1029 09:10:06.882056  310203 start.go:242] waiting for startup goroutines ...
	I1029 09:10:06.882066  310203 start.go:247] waiting for cluster config update ...
	I1029 09:10:06.882086  310203 start.go:256] writing updated cluster config ...
	I1029 09:10:06.882478  310203 ssh_runner.go:195] Run: rm -f paused
	I1029 09:10:06.886869  310203 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:06.890602  310203 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w9vf6" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:10:08.896368  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	I1029 09:10:06.860500  310655 addons.go:515] duration metric: took 2.755277903s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1029 09:10:07.352187  310655 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1029 09:10:07.356229  310655 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1029 09:10:07.357259  310655 api_server.go:141] control plane version: v1.34.1
	I1029 09:10:07.357288  310655 api_server.go:131] duration metric: took 505.633024ms to wait for apiserver health ...
	I1029 09:10:07.357304  310655 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:10:07.360570  310655 system_pods.go:59] 8 kube-system pods found
	I1029 09:10:07.360605  310655 system_pods.go:61] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:07.360625  310655 system_pods.go:61] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:07.360640  310655 system_pods.go:61] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:10:07.360648  310655 system_pods.go:61] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:07.360661  310655 system_pods.go:61] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:07.360672  310655 system_pods.go:61] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:10:07.360692  310655 system_pods.go:61] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:07.360702  310655 system_pods.go:61] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:07.360709  310655 system_pods.go:74] duration metric: took 3.399687ms to wait for pod list to return data ...
	I1029 09:10:07.360718  310655 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:10:07.362744  310655 default_sa.go:45] found service account: "default"
	I1029 09:10:07.362764  310655 default_sa.go:55] duration metric: took 2.040399ms for default service account to be created ...
	I1029 09:10:07.362774  310655 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:10:07.365347  310655 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:07.365369  310655 system_pods.go:89] "coredns-66bc5c9577-bgslp" [8f0fcbc0-6872-42e0-a601-21fc1d777bc3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:07.365377  310655 system_pods.go:89] "etcd-no-preload-043790" [8021c438-763d-43d2-a61b-10a533eafb94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:07.365384  310655 system_pods.go:89] "kindnet-dlrgv" [f12f7640-1309-4575-aa29-6f262b956f0a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:10:07.365390  310655 system_pods.go:89] "kube-apiserver-no-preload-043790" [2633f749-fbf0-4a24-8fbb-574f6ac7d7a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:07.365396  310655 system_pods.go:89] "kube-controller-manager-no-preload-043790" [2810e859-4eda-4452-aa92-849c03b5f453] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:07.365404  310655 system_pods.go:89] "kube-proxy-7dc8p" [0ba63a1c-9709-4ebd-8ca2-664740d92a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:10:07.365411  310655 system_pods.go:89] "kube-scheduler-no-preload-043790" [ce9f47b3-0716-4567-8724-18d1ebc54ced] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:07.365423  310655 system_pods.go:89] "storage-provisioner" [224fa5f2-7b79-4a88-aff2-e3015c0eb63f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:07.365430  310655 system_pods.go:126] duration metric: took 2.650798ms to wait for k8s-apps to be running ...
	I1029 09:10:07.365438  310655 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:10:07.365480  310655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:07.381588  310655 system_svc.go:56] duration metric: took 16.13788ms WaitForService to wait for kubelet
	I1029 09:10:07.381620  310655 kubeadm.go:587] duration metric: took 3.27666191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:07.381645  310655 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:10:07.385153  310655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:10:07.385186  310655 node_conditions.go:123] node cpu capacity is 8
	I1029 09:10:07.385203  310655 node_conditions.go:105] duration metric: took 3.552581ms to run NodePressure ...
	I1029 09:10:07.385218  310655 start.go:242] waiting for startup goroutines ...
	I1029 09:10:07.385228  310655 start.go:247] waiting for cluster config update ...
	I1029 09:10:07.385241  310655 start.go:256] writing updated cluster config ...
	I1029 09:10:07.385497  310655 ssh_runner.go:195] Run: rm -f paused
	I1029 09:10:07.390527  310655 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:07.395572  310655 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:10:09.401861  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	W1029 09:10:10.801775  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	W1029 09:10:13.300696  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 29 09:10:04 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:04.008391209Z" level=info msg="Starting container: 74baad07e44bc046a4e6c74c8d4192b85b60d17d9298fb0c73e6df51f881dae2" id=c834f58a-1bad-47f9-9dc8-2b48bf4b76de name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:04 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:04.012282054Z" level=info msg="Started container" PID=1843 containerID=74baad07e44bc046a4e6c74c8d4192b85b60d17d9298fb0c73e6df51f881dae2 description=kube-system/coredns-66bc5c9577-qtsxl/coredns id=c834f58a-1bad-47f9-9dc8-2b48bf4b76de name=/runtime.v1.RuntimeService/StartContainer sandboxID=ffd94c08b5865189f4477e2cfea5e27d18d058ad3fe18db3a58f2ed0879de45a
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.038117813Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5e6da170-2633-4e23-9dc7-91a1762a2e91 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.038223425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.042943347Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e5aa6877d25a54ee625bec98dd6b9e4aebeb3d01e77fdd52998350d69d11753a UID:5a6e73ef-c304-441e-9c28-76a4f3babb6e NetNS:/var/run/netns/1a4effa9-28aa-44ae-96bc-5e3a07a70a00 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540620}] Aliases:map[]}"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.042973672Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.053665732Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e5aa6877d25a54ee625bec98dd6b9e4aebeb3d01e77fdd52998350d69d11753a UID:5a6e73ef-c304-441e-9c28-76a4f3babb6e NetNS:/var/run/netns/1a4effa9-28aa-44ae-96bc-5e3a07a70a00 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540620}] Aliases:map[]}"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.05379464Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.054589316Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.055409723Z" level=info msg="Ran pod sandbox e5aa6877d25a54ee625bec98dd6b9e4aebeb3d01e77fdd52998350d69d11753a with infra container: default/busybox/POD" id=5e6da170-2633-4e23-9dc7-91a1762a2e91 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.056493068Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=238b95f0-7eb2-40c1-947e-3b6d2bd72687 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.056623245Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=238b95f0-7eb2-40c1-947e-3b6d2bd72687 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.056657116Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=238b95f0-7eb2-40c1-947e-3b6d2bd72687 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.057403314Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbf51b28-247c-49a8-829c-c6826cfac939 name=/runtime.v1.ImageService/PullImage
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.058878671Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.794642913Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=cbf51b28-247c-49a8-829c-c6826cfac939 name=/runtime.v1.ImageService/PullImage
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.79558744Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=259892f3-b514-4687-a9e5-807e1ceda8e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.797394802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2b311de-fb76-47a2-87e3-2faa0cdf9fc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.801636541Z" level=info msg="Creating container: default/busybox/busybox" id=ac9d98eb-0ddf-43f8-89bf-f6e2c5c63605 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.801801454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.80603489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.806579976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.832440123Z" level=info msg="Created container 89d73cc56ab28ad59c6d9e991b5c242958598666fef08a18b6266efc466de3af: default/busybox/busybox" id=ac9d98eb-0ddf-43f8-89bf-f6e2c5c63605 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.833218428Z" level=info msg="Starting container: 89d73cc56ab28ad59c6d9e991b5c242958598666fef08a18b6266efc466de3af" id=155cf4bb-bbef-446d-b0fa-fba9d853b2b7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:07 default-k8s-diff-port-017274 crio[779]: time="2025-10-29T09:10:07.835208145Z" level=info msg="Started container" PID=1910 containerID=89d73cc56ab28ad59c6d9e991b5c242958598666fef08a18b6266efc466de3af description=default/busybox/busybox id=155cf4bb-bbef-446d-b0fa-fba9d853b2b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e5aa6877d25a54ee625bec98dd6b9e4aebeb3d01e77fdd52998350d69d11753a
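	The CRI-O entries above trace the busybox pod through the full CRI lifecycle: RunPodSandbox (with kindnet CNI attachment), ImageStatus, PullImage, CreateContainer, StartContainer. The same state can be inspected from inside the node with crictl; a minimal sketch, assuming a shell on the profile's node:

		minikube -p default-k8s-diff-port-017274 ssh
		sudo crictl pods --name busybox
		sudo crictl ps --pod e5aa6877d25a5
		sudo crictl logs 89d73cc56ab28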
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	89d73cc56ab28       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   e5aa6877d25a5       busybox                                                default
	74baad07e44bc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   ffd94c08b5865       coredns-66bc5c9577-qtsxl                               kube-system
	5682475295c36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   5c446ca72efeb       storage-provisioner                                    kube-system
	11416900181b7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   cc10cd7770a17       kindnet-tdtxm                                          kube-system
	29ea0dfc3fce3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   c5080a04bace4       kube-proxy-82xcl                                       kube-system
	03f528d92c95d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   68827b195f54f       kube-scheduler-default-k8s-diff-port-017274            kube-system
	0d2bfecff12e1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   5e4a6b92c0581       kube-apiserver-default-k8s-diff-port-017274            kube-system
	0c0f5456b0d39       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   7cf8f0d4e1034       kube-controller-manager-default-k8s-diff-port-017274   kube-system
	edc3dc48f0591       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   55063ea24b09f       etcd-default-k8s-diff-port-017274                      kube-system
	
	
	==> coredns [74baad07e44bc046a4e6c74c8d4192b85b60d17d9298fb0c73e6df51f881dae2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48866 - 46702 "HINFO IN 89244087915095104.5419218998399948780. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.08515618s
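	The single HINFO query above is CoreDNS probing itself: the loop plugin sends a randomly named HINFO request at startup to detect forwarding loops, and the NXDOMAIN answer simply means the query went upstream and back, i.e. no loop. In-cluster resolution can be exercised from a throwaway pod; a minimal sketch, assuming the busybox test image used elsewhere in this run is pullable:

		kubectl --context default-k8s-diff-port-017274 run dns-probe --rm -it --restart=Never \
		  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default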
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-017274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-017274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=default-k8s-diff-port-017274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-017274
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:10:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:10:03 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:10:03 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:10:03 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:10:03 +0000   Wed, 29 Oct 2025 09:10:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-017274
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c5ea9dce-72e7-4834-9b46-0ce5130939cc
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-qtsxl                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-017274                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-tdtxm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-017274             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-017274    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-82xcl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-017274             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-017274 event: Registered Node default-k8s-diff-port-017274 in Controller
	  Normal  NodeReady                12s                kubelet          Node default-k8s-diff-port-017274 status is now: NodeReady
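	The section above is standard kubectl node-description output for the profile's single node; while the cluster is up it can be regenerated directly:

		kubectl --context default-k8s-diff-port-017274 describe node default-k8s-diff-port-017274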
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [edc3dc48f0591ae83dc8f2657c9e397c50ca2f220c9dfc49ecb6ec4628f43f14] <==
	{"level":"warn","ts":"2025-10-29T09:09:43.356342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.364730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.371946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.378742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.385409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.392059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.399165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.405461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.412124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.424202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.430645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.437624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.444139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.450217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.456173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.462638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.468751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.475681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.482252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.488695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.495253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.509558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.516386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.522472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:09:43.568901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:10:16 up 52 min,  0 user,  load average: 6.08, 4.24, 2.59
	Linux default-k8s-diff-port-017274 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11416900181b79955212d7283091b120787547217b9834d3d5e0c550a577d7c0] <==
	I1029 09:09:52.818837       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:09:52.819123       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1029 09:09:52.819277       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:09:52.819292       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:09:52.819313       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:09:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:09:53.022222       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:09:53.022267       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:09:53.022279       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:09:53.022435       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:09:53.423068       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:09:53.423103       1 metrics.go:72] Registering metrics
	I1029 09:09:53.423163       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:03.022149       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:10:03.022228       1 main.go:301] handling current node
	I1029 09:10:13.024891       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:10:13.024959       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d2bfecff12e13921c8c2086b758f576fc3308b5a9f916c7d00e02d8f84efd09] <==
	E1029 09:09:44.083717       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1029 09:09:44.130805       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:09:44.134574       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:44.134687       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:09:44.141114       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:44.141358       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:09:44.249288       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:09:44.933836       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:09:44.937584       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:09:44.937602       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:09:45.427896       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:09:45.464296       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:09:45.538759       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:09:45.544874       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1029 09:09:45.546259       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:09:45.551303       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:09:45.991772       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:09:46.564898       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:09:46.575135       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:09:46.582610       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:09:51.146777       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:51.150811       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:09:51.344219       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:09:51.693840       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1029 09:10:13.869268       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:37136: use of closed network connection
	
	
	==> kube-controller-manager [0c0f5456b0d395e38cb1f3dbe8a98d379f7f3b1c47d12baf8b640e0b0315c0a2] <==
	I1029 09:09:50.955048       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-017274" podCIDRs=["10.244.0.0/24"]
	I1029 09:09:50.955158       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:09:50.962174       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:09:50.968473       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:09:50.973785       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:09:50.990928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:09:50.990936       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:09:50.990946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:09:50.990953       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:09:50.991008       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:09:50.991069       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:09:50.991069       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:09:50.991188       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:09:50.991388       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:09:50.991448       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:09:50.991618       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:09:50.991655       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:09:50.992673       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:09:50.992685       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:09:50.992712       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:09:50.992730       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:09:50.997219       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:09:51.010431       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:09:51.010473       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:10:05.941327       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [29ea0dfc3fce38107b4495dd75ca622798181cef37c947073b4046c3711de3b4] <==
	I1029 09:09:52.708077       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:09:52.787932       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:09:52.888267       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:09:52.888308       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1029 09:09:52.888400       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:09:52.907557       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:09:52.907614       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:09:52.913049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:09:52.913427       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:09:52.913463       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:09:52.914907       1 config.go:200] "Starting service config controller"
	I1029 09:09:52.914948       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:09:52.915045       1 config.go:309] "Starting node config controller"
	I1029 09:09:52.915053       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:09:52.915059       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:09:52.915073       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:09:52.915082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:09:52.915096       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:09:52.915110       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:09:53.015209       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:09:53.015220       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:09:53.015256       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [03f528d92c95d37e52c7ef215f1ece2b9b13688e45207cb6741116c33c77a72b] <==
	E1029 09:09:44.005750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:09:44.005785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:09:44.005791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:09:44.005785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:09:44.005408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:09:44.005857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:09:44.005931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 09:09:44.005938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:09:44.006173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:09:44.006195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:09:44.006282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:09:44.932866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:09:44.961041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1029 09:09:44.962932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:09:44.994165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:09:45.021751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:09:45.031057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:09:45.032004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:09:45.101692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:09:45.158073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:09:45.226013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:09:45.238503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1029 09:09:45.238626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:09:45.255108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1029 09:09:47.503814       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812587    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7881caf5-4a0e-483d-aa7d-1e777513587f-lib-modules\") pod \"kube-proxy-82xcl\" (UID: \"7881caf5-4a0e-483d-aa7d-1e777513587f\") " pod="kube-system/kube-proxy-82xcl"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812641    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865hm\" (UniqueName: \"kubernetes.io/projected/7881caf5-4a0e-483d-aa7d-1e777513587f-kube-api-access-865hm\") pod \"kube-proxy-82xcl\" (UID: \"7881caf5-4a0e-483d-aa7d-1e777513587f\") " pod="kube-system/kube-proxy-82xcl"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812678    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36fa8db0-2ffe-4766-b136-fc7ef839dfab-xtables-lock\") pod \"kindnet-tdtxm\" (UID: \"36fa8db0-2ffe-4766-b136-fc7ef839dfab\") " pod="kube-system/kindnet-tdtxm"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812710    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7881caf5-4a0e-483d-aa7d-1e777513587f-kube-proxy\") pod \"kube-proxy-82xcl\" (UID: \"7881caf5-4a0e-483d-aa7d-1e777513587f\") " pod="kube-system/kube-proxy-82xcl"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812730    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/36fa8db0-2ffe-4766-b136-fc7ef839dfab-cni-cfg\") pod \"kindnet-tdtxm\" (UID: \"36fa8db0-2ffe-4766-b136-fc7ef839dfab\") " pod="kube-system/kindnet-tdtxm"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812751    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36fa8db0-2ffe-4766-b136-fc7ef839dfab-lib-modules\") pod \"kindnet-tdtxm\" (UID: \"36fa8db0-2ffe-4766-b136-fc7ef839dfab\") " pod="kube-system/kindnet-tdtxm"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812773    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdpc\" (UniqueName: \"kubernetes.io/projected/36fa8db0-2ffe-4766-b136-fc7ef839dfab-kube-api-access-whdpc\") pod \"kindnet-tdtxm\" (UID: \"36fa8db0-2ffe-4766-b136-fc7ef839dfab\") " pod="kube-system/kindnet-tdtxm"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:51.812864    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7881caf5-4a0e-483d-aa7d-1e777513587f-xtables-lock\") pod \"kube-proxy-82xcl\" (UID: \"7881caf5-4a0e-483d-aa7d-1e777513587f\") " pod="kube-system/kube-proxy-82xcl"
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: E1029 09:09:51.919606    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: E1029 09:09:51.919646    1308 projected.go:196] Error preparing data for projected volume kube-api-access-865hm for pod kube-system/kube-proxy-82xcl: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: E1029 09:09:51.919700    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: E1029 09:09:51.919733    1308 projected.go:196] Error preparing data for projected volume kube-api-access-whdpc for pod kube-system/kindnet-tdtxm: configmap "kube-root-ca.crt" not found
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: E1029 09:09:51.919762    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7881caf5-4a0e-483d-aa7d-1e777513587f-kube-api-access-865hm podName:7881caf5-4a0e-483d-aa7d-1e777513587f nodeName:}" failed. No retries permitted until 2025-10-29 09:09:52.419712531 +0000 UTC m=+6.109234326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-865hm" (UniqueName: "kubernetes.io/projected/7881caf5-4a0e-483d-aa7d-1e777513587f-kube-api-access-865hm") pod "kube-proxy-82xcl" (UID: "7881caf5-4a0e-483d-aa7d-1e777513587f") : configmap "kube-root-ca.crt" not found
	Oct 29 09:09:51 default-k8s-diff-port-017274 kubelet[1308]: E1029 09:09:51.919796    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36fa8db0-2ffe-4766-b136-fc7ef839dfab-kube-api-access-whdpc podName:36fa8db0-2ffe-4766-b136-fc7ef839dfab nodeName:}" failed. No retries permitted until 2025-10-29 09:09:52.419784666 +0000 UTC m=+6.109306433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-whdpc" (UniqueName: "kubernetes.io/projected/36fa8db0-2ffe-4766-b136-fc7ef839dfab-kube-api-access-whdpc") pod "kindnet-tdtxm" (UID: "36fa8db0-2ffe-4766-b136-fc7ef839dfab") : configmap "kube-root-ca.crt" not found
	Oct 29 09:09:53 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:53.437192    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tdtxm" podStartSLOduration=2.437175287 podStartE2EDuration="2.437175287s" podCreationTimestamp="2025-10-29 09:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:53.436942227 +0000 UTC m=+7.126464030" watchObservedRunningTime="2025-10-29 09:09:53.437175287 +0000 UTC m=+7.126697073"
	Oct 29 09:09:57 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:09:57.073148    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-82xcl" podStartSLOduration=6.073124997 podStartE2EDuration="6.073124997s" podCreationTimestamp="2025-10-29 09:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:09:53.448686575 +0000 UTC m=+7.138208343" watchObservedRunningTime="2025-10-29 09:09:57.073124997 +0000 UTC m=+10.762646783"
	Oct 29 09:10:03 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:03.521749    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 29 09:10:03 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:03.605145    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a2ec03f2-f2b6-42f9-a758-85de0d658ec3-tmp\") pod \"storage-provisioner\" (UID: \"a2ec03f2-f2b6-42f9-a758-85de0d658ec3\") " pod="kube-system/storage-provisioner"
	Oct 29 09:10:03 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:03.605367    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c671126a-10b8-46ff-b868-24fb3c0c8271-config-volume\") pod \"coredns-66bc5c9577-qtsxl\" (UID: \"c671126a-10b8-46ff-b868-24fb3c0c8271\") " pod="kube-system/coredns-66bc5c9577-qtsxl"
	Oct 29 09:10:03 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:03.605428    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8h8\" (UniqueName: \"kubernetes.io/projected/a2ec03f2-f2b6-42f9-a758-85de0d658ec3-kube-api-access-tc8h8\") pod \"storage-provisioner\" (UID: \"a2ec03f2-f2b6-42f9-a758-85de0d658ec3\") " pod="kube-system/storage-provisioner"
	Oct 29 09:10:03 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:03.605484    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9dm9\" (UniqueName: \"kubernetes.io/projected/c671126a-10b8-46ff-b868-24fb3c0c8271-kube-api-access-h9dm9\") pod \"coredns-66bc5c9577-qtsxl\" (UID: \"c671126a-10b8-46ff-b868-24fb3c0c8271\") " pod="kube-system/coredns-66bc5c9577-qtsxl"
	Oct 29 09:10:04 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:04.490218    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.490196821 podStartE2EDuration="12.490196821s" podCreationTimestamp="2025-10-29 09:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:10:04.475707653 +0000 UTC m=+18.165229442" watchObservedRunningTime="2025-10-29 09:10:04.490196821 +0000 UTC m=+18.179718608"
	Oct 29 09:10:06 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:06.727848    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qtsxl" podStartSLOduration=14.727819042 podStartE2EDuration="14.727819042s" podCreationTimestamp="2025-10-29 09:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:10:04.492459931 +0000 UTC m=+18.181981701" watchObservedRunningTime="2025-10-29 09:10:06.727819042 +0000 UTC m=+20.417340830"
	Oct 29 09:10:06 default-k8s-diff-port-017274 kubelet[1308]: I1029 09:10:06.830261    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjpzs\" (UniqueName: \"kubernetes.io/projected/5a6e73ef-c304-441e-9c28-76a4f3babb6e-kube-api-access-jjpzs\") pod \"busybox\" (UID: \"5a6e73ef-c304-441e-9c28-76a4f3babb6e\") " pod="default/busybox"
	Oct 29 09:10:13 default-k8s-diff-port-017274 kubelet[1308]: E1029 09:10:13.870561    1308 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53932->127.0.0.1:43571: write tcp 127.0.0.1:53932->127.0.0.1:43571: write: broken pipe
	
	
	==> storage-provisioner [5682475295c365bbd4b41d9e959211f7cfff682fc43a801c4fd15b5f53f418af] <==
	I1029 09:10:03.976780       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:10:03.993602       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:10:03.993738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:10:04.003699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:04.020255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:04.027081       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:10:04.028209       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-017274_e0abfe5f-2e30-4a2c-af98-fe1f305fe464!
	I1029 09:10:04.027514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc8dd318-9670-4d4d-99bd-9ed78324108f", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-017274_e0abfe5f-2e30-4a2c-af98-fe1f305fe464 became leader
	W1029 09:10:04.035361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:04.055817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:04.129164       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-017274_e0abfe5f-2e30-4a2c-af98-fe1f305fe464!
	W1029 09:10:06.060208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:06.072433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:08.076122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:08.082160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:10.085800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:10.098509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:12.102411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:12.108048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:14.114496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:14.120746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:16.127056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:16.134381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
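One recurring note in the storage-provisioner log above: it acquires its leader lease through a v1 Endpoints object, and the API server repeatedly warns that v1 Endpoints is deprecated in v1.33+. For leader election specifically, the usual migration target is a coordination.k8s.io Lease lock. The following is a minimal client-go sketch of Lease-based leader election, not the provisioner's actual code; the lease name, namespace, timings, and in-cluster config are illustrative assumptions.

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lease-based lock instead of the deprecated v1 Endpoints object.
		// Name/namespace mirror the lease seen in the log; purely illustrative.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start provisioning */ },
				OnStoppedLeading: func() { os.Exit(0) },
			},
		})
	}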
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-017274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.15s)
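The post-mortem above closes with the same health check every failed test runs: list any pod whose status.phase is not Running. For readers scripting their own triage, here is a minimal client-go sketch of that query; the kubeconfig source and error handling are illustrative assumptions, not part of the test harness.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config); the
		// test harness instead passes --context <profile> to kubectl.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same query as the kubectl post-mortem check: pods in any namespace
		// whose status.phase is not Running.
		pods, err := clientset.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}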

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-096492 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-096492 --alsologtostderr -v=1: exit status 80 (2.435074933s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-096492 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:10:49.612743  320163 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:49.613059  320163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:49.613070  320163 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:49.613076  320163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:49.613341  320163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:49.613665  320163 out.go:368] Setting JSON to false
	I1029 09:10:49.613759  320163 mustload.go:66] Loading cluster: old-k8s-version-096492
	I1029 09:10:49.614256  320163 config.go:182] Loaded profile config "old-k8s-version-096492": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:10:49.614899  320163 cli_runner.go:164] Run: docker container inspect old-k8s-version-096492 --format={{.State.Status}}
	I1029 09:10:49.639856  320163 host.go:66] Checking if "old-k8s-version-096492" exists ...
	I1029 09:10:49.640304  320163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:49.721978  320163 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-29 09:10:49.708546785 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:49.722656  320163 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-096492 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:10:49.725905  320163 out.go:179] * Pausing node old-k8s-version-096492 ... 
	I1029 09:10:49.727241  320163 host.go:66] Checking if "old-k8s-version-096492" exists ...
	I1029 09:10:49.727517  320163 ssh_runner.go:195] Run: systemctl --version
	I1029 09:10:49.727553  320163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-096492
	I1029 09:10:49.749506  320163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/old-k8s-version-096492/id_rsa Username:docker}
	I1029 09:10:49.856031  320163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:49.871583  320163 pause.go:52] kubelet running: true
	I1029 09:10:49.871649  320163 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:50.070098  320163 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:50.070201  320163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:50.168523  320163 cri.go:89] found id: "760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac"
	I1029 09:10:50.168546  320163 cri.go:89] found id: "c487e8fe869e3ca2313a2d3948922a35774499c95c5df3089ea171e2f4b4e5e9"
	I1029 09:10:50.168551  320163 cri.go:89] found id: "4d9dad20289cc57f254242baa3e4cb4a7f518fd33fd2534285dec97a5d521b07"
	I1029 09:10:50.168556  320163 cri.go:89] found id: "031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968"
	I1029 09:10:50.168560  320163 cri.go:89] found id: "737af4626b5ae3892122d80b6d43829693d67087a126e1edab5cb80129fc0b89"
	I1029 09:10:50.168564  320163 cri.go:89] found id: "eb75fa40098e331528b7e94c2e2a71c766423c56a220a24eaaa69f66efdce4b6"
	I1029 09:10:50.168568  320163 cri.go:89] found id: "d92dd056da0fc02c82efa03b27cf291df638b891640d4514b8dea24f11e44842"
	I1029 09:10:50.168572  320163 cri.go:89] found id: "f75d2e46364d0954bc8168a45bbf13f9854e2c28802b489937d6d807e197c25c"
	I1029 09:10:50.168576  320163 cri.go:89] found id: "3c2ce552cdf8c320285c2bb9f072826ac4a862ddc09798713d1491913854ccfa"
	I1029 09:10:50.168583  320163 cri.go:89] found id: "30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	I1029 09:10:50.168587  320163 cri.go:89] found id: "bcd2e5c5941b1ea800a416d3424313ba461f9e10fec33ea99dbad02c9f819245"
	I1029 09:10:50.168591  320163 cri.go:89] found id: ""
	I1029 09:10:50.168639  320163 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:50.185934  320163 retry.go:31] will retry after 201.048779ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:50Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:50.387482  320163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:50.405430  320163 pause.go:52] kubelet running: false
	I1029 09:10:50.405494  320163 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:50.623279  320163 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:50.623367  320163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:50.711729  320163 cri.go:89] found id: "760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac"
	I1029 09:10:50.711760  320163 cri.go:89] found id: "c487e8fe869e3ca2313a2d3948922a35774499c95c5df3089ea171e2f4b4e5e9"
	I1029 09:10:50.711767  320163 cri.go:89] found id: "4d9dad20289cc57f254242baa3e4cb4a7f518fd33fd2534285dec97a5d521b07"
	I1029 09:10:50.711772  320163 cri.go:89] found id: "031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968"
	I1029 09:10:50.711776  320163 cri.go:89] found id: "737af4626b5ae3892122d80b6d43829693d67087a126e1edab5cb80129fc0b89"
	I1029 09:10:50.711781  320163 cri.go:89] found id: "eb75fa40098e331528b7e94c2e2a71c766423c56a220a24eaaa69f66efdce4b6"
	I1029 09:10:50.711785  320163 cri.go:89] found id: "d92dd056da0fc02c82efa03b27cf291df638b891640d4514b8dea24f11e44842"
	I1029 09:10:50.711790  320163 cri.go:89] found id: "f75d2e46364d0954bc8168a45bbf13f9854e2c28802b489937d6d807e197c25c"
	I1029 09:10:50.711795  320163 cri.go:89] found id: "3c2ce552cdf8c320285c2bb9f072826ac4a862ddc09798713d1491913854ccfa"
	I1029 09:10:50.711812  320163 cri.go:89] found id: "30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	I1029 09:10:50.711823  320163 cri.go:89] found id: "bcd2e5c5941b1ea800a416d3424313ba461f9e10fec33ea99dbad02c9f819245"
	I1029 09:10:50.711826  320163 cri.go:89] found id: ""
	I1029 09:10:50.711872  320163 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:50.728194  320163 retry.go:31] will retry after 318.822915ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:50Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:51.047839  320163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:51.062671  320163 pause.go:52] kubelet running: false
	I1029 09:10:51.062740  320163 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:51.261372  320163 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:51.261474  320163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:51.363938  320163 cri.go:89] found id: "760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac"
	I1029 09:10:51.363969  320163 cri.go:89] found id: "c487e8fe869e3ca2313a2d3948922a35774499c95c5df3089ea171e2f4b4e5e9"
	I1029 09:10:51.363979  320163 cri.go:89] found id: "4d9dad20289cc57f254242baa3e4cb4a7f518fd33fd2534285dec97a5d521b07"
	I1029 09:10:51.363987  320163 cri.go:89] found id: "031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968"
	I1029 09:10:51.364046  320163 cri.go:89] found id: "737af4626b5ae3892122d80b6d43829693d67087a126e1edab5cb80129fc0b89"
	I1029 09:10:51.364053  320163 cri.go:89] found id: "eb75fa40098e331528b7e94c2e2a71c766423c56a220a24eaaa69f66efdce4b6"
	I1029 09:10:51.364058  320163 cri.go:89] found id: "d92dd056da0fc02c82efa03b27cf291df638b891640d4514b8dea24f11e44842"
	I1029 09:10:51.364064  320163 cri.go:89] found id: "f75d2e46364d0954bc8168a45bbf13f9854e2c28802b489937d6d807e197c25c"
	I1029 09:10:51.364082  320163 cri.go:89] found id: "3c2ce552cdf8c320285c2bb9f072826ac4a862ddc09798713d1491913854ccfa"
	I1029 09:10:51.364098  320163 cri.go:89] found id: "30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	I1029 09:10:51.364108  320163 cri.go:89] found id: "bcd2e5c5941b1ea800a416d3424313ba461f9e10fec33ea99dbad02c9f819245"
	I1029 09:10:51.364119  320163 cri.go:89] found id: ""
	I1029 09:10:51.364180  320163 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:51.381629  320163 retry.go:31] will retry after 281.621721ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:51Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:51.664317  320163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:51.682277  320163 pause.go:52] kubelet running: false
	I1029 09:10:51.682353  320163 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:51.863469  320163 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:51.863642  320163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:51.943041  320163 cri.go:89] found id: "760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac"
	I1029 09:10:51.943070  320163 cri.go:89] found id: "c487e8fe869e3ca2313a2d3948922a35774499c95c5df3089ea171e2f4b4e5e9"
	I1029 09:10:51.943077  320163 cri.go:89] found id: "4d9dad20289cc57f254242baa3e4cb4a7f518fd33fd2534285dec97a5d521b07"
	I1029 09:10:51.943082  320163 cri.go:89] found id: "031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968"
	I1029 09:10:51.943086  320163 cri.go:89] found id: "737af4626b5ae3892122d80b6d43829693d67087a126e1edab5cb80129fc0b89"
	I1029 09:10:51.943091  320163 cri.go:89] found id: "eb75fa40098e331528b7e94c2e2a71c766423c56a220a24eaaa69f66efdce4b6"
	I1029 09:10:51.943095  320163 cri.go:89] found id: "d92dd056da0fc02c82efa03b27cf291df638b891640d4514b8dea24f11e44842"
	I1029 09:10:51.943099  320163 cri.go:89] found id: "f75d2e46364d0954bc8168a45bbf13f9854e2c28802b489937d6d807e197c25c"
	I1029 09:10:51.943103  320163 cri.go:89] found id: "3c2ce552cdf8c320285c2bb9f072826ac4a862ddc09798713d1491913854ccfa"
	I1029 09:10:51.943110  320163 cri.go:89] found id: "30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	I1029 09:10:51.943114  320163 cri.go:89] found id: "bcd2e5c5941b1ea800a416d3424313ba461f9e10fec33ea99dbad02c9f819245"
	I1029 09:10:51.943117  320163 cri.go:89] found id: ""
	I1029 09:10:51.943165  320163 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:51.959349  320163 out.go:203] 
	W1029 09:10:51.960767  320163 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:10:51.960795  320163 out.go:285] * 
	* 
	W1029 09:10:51.965824  320163 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:10:51.967405  320163 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-096492 --alsologtostderr -v=1 failed: exit status 80
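The stderr trace shows why pause fails here: every `sudo runc list -f json` returns `open /run/runc: no such file or directory`, and minikube retries with short randomized backoffs (201ms, 318ms, 281ms in retry.go) before exiting with GUEST_PAUSE. A minimal Go sketch of that retry-with-backoff pattern follows; the command, backoff bounds, and deadline are illustrative assumptions, not minikube's actual pause code.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// listRunning shells out to `sudo runc list -f json` and retries with a
	// short randomized backoff, mimicking the retry.go pattern in the trace.
	func listRunning(deadline time.Duration) ([]byte, error) {
		start := time.Now()
		for {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err == nil {
				return out, nil
			}
			if time.Since(start) > deadline {
				// Matches the eventual GUEST_PAUSE failure mode in the log.
				return nil, fmt.Errorf("list running: runc: %w", err)
			}
			wait := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		if out, err := listRunning(2 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Printf("%s\n", out)
		}
	}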
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-096492
helpers_test.go:243: (dbg) docker inspect old-k8s-version-096492:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487",
	        "Created": "2025-10-29T09:08:32.774738315Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308800,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:09:49.690951743Z",
	            "FinishedAt": "2025-10-29T09:09:48.794323899Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/hostname",
	        "HostsPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/hosts",
	        "LogPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487-json.log",
	        "Name": "/old-k8s-version-096492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-096492:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-096492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487",
	                "LowerDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-096492",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-096492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-096492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-096492",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-096492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c31c5e5371a49f25886b0c91a045e5f9ce816f17397531870382138ae048edb7",
	            "SandboxKey": "/var/run/docker/netns/c31c5e5371a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-096492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:e9:9a:86:25:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1d4705eea8799ddd63b1a9cbeb0ede40231eb0a1209d909b2eae8f7a7d7c543",
	                    "EndpointID": "3ea85180d968c45939a341f777a1713cd2efbb13655e27f51fef10bb487f0364",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-096492",
	                        "949e662a4724"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
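
The host-port mappings recorded under NetworkSettings.Ports in the inspect dump above (22/tcp on 127.0.0.1:33108, 8443/tcp on 127.0.0.1:33111, and so on) are the same values the tooling later reads back with a Go template. As a rough standalone sketch of that lookup done in Go instead of a template (a hypothetical helper, not part of the test suite; it assumes docker is on PATH and the container from this run still exists):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding mirrors one entry under NetworkSettings.Ports in docker inspect output.
type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// docker container inspect always emits a JSON array, even for one container.
	out, err := exec.Command("docker", "container", "inspect", "old-k8s-version-096492").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		panic("unexpected inspect output")
	}
	for proto, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
		}
	}
}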
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492: exit status 2 (369.647106ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-096492 logs -n 25
E1029 09:10:53.121428    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-096492 logs -n 25: (1.163426917s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-240549 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo crio config                                                                                                                                                                                                             │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p bridge-240549                                                                                                                                                                                                                              │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-318335                                                                                                                                                                                                               │ disable-driver-mounts-318335 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:34.314162  317625 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:34.314402  317625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:34.314410  317625 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:34.314414  317625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:34.314634  317625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:34.315081  317625 out.go:368] Setting JSON to false
	I1029 09:10:34.316495  317625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3182,"bootTime":1761725852,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:34.316584  317625 start.go:143] virtualization: kvm guest
	I1029 09:10:34.318767  317625 out.go:179] * [default-k8s-diff-port-017274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:34.320082  317625 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:34.320074  317625 notify.go:221] Checking for updates...
	I1029 09:10:34.322494  317625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:34.324124  317625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:34.325558  317625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:34.326930  317625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:34.328350  317625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:34.330256  317625 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:34.330944  317625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:34.357479  317625 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:34.357608  317625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:34.422564  317625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:10:34.411959992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:34.422671  317625 docker.go:319] overlay module found
	I1029 09:10:34.425110  317625 out.go:179] * Using the docker driver based on existing profile
	W1029 09:10:29.799925  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	W1029 09:10:31.800684  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	W1029 09:10:33.800746  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	I1029 09:10:34.426363  317625 start.go:309] selected driver: docker
	I1029 09:10:34.426404  317625 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:34.426495  317625 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:34.427114  317625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:34.488245  317625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:10:34.476753433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:34.488557  317625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:34.488588  317625 cni.go:84] Creating CNI manager for ""
	I1029 09:10:34.488640  317625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:34.488711  317625 start.go:353] cluster config:
	{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:34.490469  317625 out.go:179] * Starting "default-k8s-diff-port-017274" primary control-plane node in "default-k8s-diff-port-017274" cluster
	I1029 09:10:34.491673  317625 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:34.492880  317625 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:34.494011  317625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:34.494059  317625 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:34.494079  317625 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:34.494112  317625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:34.494176  317625 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:34.494189  317625 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:10:34.494299  317625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:10:34.516205  317625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:34.516231  317625 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:34.516249  317625 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:10:34.516279  317625 start.go:360] acquireMachinesLock for default-k8s-diff-port-017274: {Name:mkec68307c2ffe0cd4f9e8fcf3c8e2dc4c6d4bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:34.516374  317625 start.go:364] duration metric: took 69.184µs to acquireMachinesLock for "default-k8s-diff-port-017274"
	I1029 09:10:34.516399  317625 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:10:34.516408  317625 fix.go:54] fixHost starting: 
	I1029 09:10:34.516710  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:34.536101  317625 fix.go:112] recreateIfNeeded on default-k8s-diff-port-017274: state=Stopped err=<nil>
	W1029 09:10:34.536145  317625 fix.go:138] unexpected machine state, will restart: <nil>
	W1029 09:10:32.897121  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:34.897709  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:33.402105  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	W1029 09:10:35.901778  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	I1029 09:10:36.302413  308587 pod_ready.go:94] pod "coredns-5dd5756b68-v5mr5" is "Ready"
	I1029 09:10:36.302442  308587 pod_ready.go:86] duration metric: took 34.508479031s for pod "coredns-5dd5756b68-v5mr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.305786  308587 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.310818  308587 pod_ready.go:94] pod "etcd-old-k8s-version-096492" is "Ready"
	I1029 09:10:36.310845  308587 pod_ready.go:86] duration metric: took 5.032342ms for pod "etcd-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.313610  308587 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.318204  308587 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-096492" is "Ready"
	I1029 09:10:36.318230  308587 pod_ready.go:86] duration metric: took 4.597684ms for pod "kube-apiserver-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.321349  308587 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.497957  308587 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-096492" is "Ready"
	I1029 09:10:36.497981  308587 pod_ready.go:86] duration metric: took 176.608582ms for pod "kube-controller-manager-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.698657  308587 pod_ready.go:83] waiting for pod "kube-proxy-8kpqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.097318  308587 pod_ready.go:94] pod "kube-proxy-8kpqf" is "Ready"
	I1029 09:10:37.097343  308587 pod_ready.go:86] duration metric: took 398.661369ms for pod "kube-proxy-8kpqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.298274  308587 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.697624  308587 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-096492" is "Ready"
	I1029 09:10:37.697650  308587 pod_ready.go:86] duration metric: took 399.348612ms for pod "kube-scheduler-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.697661  308587 pod_ready.go:40] duration metric: took 35.908433904s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:37.743274  308587 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1029 09:10:37.744939  308587 out.go:203] 
	W1029 09:10:37.746335  308587 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1029 09:10:37.747520  308587 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1029 09:10:37.748896  308587 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-096492" cluster and "default" namespace by default
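
The pod_ready.go entries above poll each kube-system pod until its Ready condition reports true, then log the elapsed time as a duration metric. A minimal client-go sketch of that wait loop (an illustrative stand-in, not minikube's implementation; the namespace and pod name are taken from this run, and the default kubeconfig location is assumed):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-v5mr5", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			// Comparable to the "duration metric: took ..." lines in the log.
			fmt.Printf("pod is Ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(2 * time.Second)
	}
}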
	I1029 09:10:34.538054  317625 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-017274" ...
	I1029 09:10:34.538151  317625 cli_runner.go:164] Run: docker start default-k8s-diff-port-017274
	I1029 09:10:34.805129  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:34.824411  317625 kic.go:430] container "default-k8s-diff-port-017274" state is running.
	I1029 09:10:34.824754  317625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:10:34.844676  317625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:10:34.845019  317625 machine.go:94] provisionDockerMachine start ...
	I1029 09:10:34.845114  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:34.865023  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:34.865270  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:34.865283  317625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:10:34.865957  317625 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45588->127.0.0.1:33123: read: connection reset by peer
	I1029 09:10:38.011366  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-017274
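
The dial error at 09:10:34.865957 (connection reset by peer) followed by a successful hostname command a few seconds later shows the provisioner simply retrying SSH until sshd inside the restarted container accepts connections. A bare-bones sketch of that wait-for-port pattern (a hypothetical helper, for illustration only; 127.0.0.1:33123 is the host port Docker mapped to the container's 22/tcp in this run):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls addr until it accepts a TCP connection or the timeout elapses.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // port is up; a real provisioner would hand off to SSH here
			return nil
		}
		// Early dials fail (e.g. "connection reset by peer") while sshd starts.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForTCP("127.0.0.1:33123", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}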
	
	I1029 09:10:38.011394  317625 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-017274"
	I1029 09:10:38.011458  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.033345  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:38.033651  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:38.033690  317625 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-017274 && echo "default-k8s-diff-port-017274" | sudo tee /etc/hostname
	I1029 09:10:38.194487  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-017274
	
	I1029 09:10:38.194582  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.214354  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:38.214605  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:38.214636  317625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-017274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-017274/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-017274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:10:38.362074  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:10:38.362104  317625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:10:38.362154  317625 ubuntu.go:190] setting up certificates
	I1029 09:10:38.362168  317625 provision.go:84] configureAuth start
	I1029 09:10:38.362240  317625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:10:38.380518  317625 provision.go:143] copyHostCerts
	I1029 09:10:38.380587  317625 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:10:38.380602  317625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:10:38.380681  317625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:10:38.380829  317625 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:10:38.380845  317625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:10:38.380891  317625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:10:38.380976  317625 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:10:38.380987  317625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:10:38.381054  317625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:10:38.381120  317625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-017274 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-017274 localhost minikube]
	I1029 09:10:38.466350  317625 provision.go:177] copyRemoteCerts
	I1029 09:10:38.466416  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:10:38.466452  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.487559  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:38.590735  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:10:38.609285  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1029 09:10:38.628229  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:10:38.648008  317625 provision.go:87] duration metric: took 285.801018ms to configureAuth
	I1029 09:10:38.648039  317625 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:10:38.648240  317625 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:38.648357  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.667541  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:38.667753  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:38.667772  317625 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:10:38.975442  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:10:38.975473  317625 machine.go:97] duration metric: took 4.130433747s to provisionDockerMachine
	I1029 09:10:38.975486  317625 start.go:293] postStartSetup for "default-k8s-diff-port-017274" (driver="docker")
	I1029 09:10:38.975500  317625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:10:38.975556  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:10:38.975615  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.996683  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.098686  317625 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:10:39.102376  317625 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:10:39.102403  317625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:10:39.102416  317625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:10:39.102475  317625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:10:39.102576  317625 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:10:39.102699  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:10:39.111386  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:39.131438  317625 start.go:296] duration metric: took 155.934122ms for postStartSetup
	I1029 09:10:39.131530  317625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:10:39.131572  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:39.150633  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.249529  317625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:10:39.254981  317625 fix.go:56] duration metric: took 4.738564782s for fixHost
	I1029 09:10:39.255038  317625 start.go:83] releasing machines lock for "default-k8s-diff-port-017274", held for 4.738649406s
	I1029 09:10:39.255109  317625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:10:39.275262  317625 ssh_runner.go:195] Run: cat /version.json
	I1029 09:10:39.275282  317625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:10:39.275327  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:39.275352  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:39.298403  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.298411  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.456100  317625 ssh_runner.go:195] Run: systemctl --version
	I1029 09:10:39.463044  317625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:10:39.502396  317625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:10:39.507564  317625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:10:39.507644  317625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:10:39.516520  317625 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:10:39.516543  317625 start.go:496] detecting cgroup driver to use...
	I1029 09:10:39.516577  317625 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:10:39.516626  317625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:10:39.533429  317625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:10:39.547753  317625 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:10:39.547827  317625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:10:39.564972  317625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:10:39.579374  317625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:10:39.667625  317625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:10:39.763084  317625 docker.go:234] disabling docker service ...
	I1029 09:10:39.763154  317625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:10:39.779119  317625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:10:39.792652  317625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:10:39.880461  317625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:10:39.971132  317625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:10:39.984735  317625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:10:40.000092  317625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:10:40.000147  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.010039  317625 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:10:40.010109  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.020031  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.029894  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.040068  317625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:10:40.049111  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.058809  317625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.067789  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
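
The sed chain above rewrites CRI-O's drop-in config in place: it pins the pause image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to "systemd" with conmon_cgroup = "pod", and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. For illustration, the first of those substitutions expressed as a small Go program instead of sed (a sketch under the same assumptions, not minikube's code):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}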
	I1029 09:10:40.077061  317625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:10:40.084847  317625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:10:40.092966  317625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:40.178284  317625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:10:40.289591  317625 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:10:40.289652  317625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:10:40.293745  317625 start.go:564] Will wait 60s for crictl version
	I1029 09:10:40.293800  317625 ssh_runner.go:195] Run: which crictl
	I1029 09:10:40.297520  317625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:10:40.322143  317625 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:10:40.322215  317625 ssh_runner.go:195] Run: crio --version
	I1029 09:10:40.353192  317625 ssh_runner.go:195] Run: crio --version
	I1029 09:10:40.384606  317625 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1029 09:10:36.900514  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:39.397190  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:38.404046  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	W1029 09:10:40.901823  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	I1029 09:10:40.385984  317625 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-017274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:10:40.406119  317625 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1029 09:10:40.410546  317625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:10:40.421854  317625 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:10:40.422057  317625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:40.422131  317625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:10:40.455472  317625 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:10:40.455501  317625 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:10:40.455559  317625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:10:40.483067  317625 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:10:40.483097  317625 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:10:40.483107  317625 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1029 09:10:40.483256  317625 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-017274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:10:40.483344  317625 ssh_runner.go:195] Run: crio config
	I1029 09:10:40.533203  317625 cni.go:84] Creating CNI manager for ""
	I1029 09:10:40.533227  317625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:40.533242  317625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:10:40.533263  317625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-017274 NodeName:default-k8s-diff-port-017274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:10:40.533415  317625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-017274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
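	(The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. To sanity-check such a file by hand, recent kubeadm releases can validate it against the v1beta4 API — a hedged sketch, reusing the binary path from this run:

	    # validate the generated config without applying it (kubeadm v1.27+)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)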
	I1029 09:10:40.533488  317625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:10:40.542333  317625 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:10:40.542399  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:10:40.551010  317625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1029 09:10:40.565346  317625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:10:40.579148  317625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1029 09:10:40.592941  317625 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:10:40.597152  317625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
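	(The bash one-liner above — like its host.minikube.internal twin earlier — is a replace-or-append pattern for pinned /etc/hosts entries: grep -v strips any stale line for the name, the fresh mapping is appended, and the result is copied back with sudo because the shell redirection itself runs unprivileged. Generalized sketch with placeholder IP/NAME:

	    # replace-or-append an /etc/hosts entry without duplicating it
	    IP=192.168.103.2; NAME=control-plane.minikube.internal
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts
	)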
	I1029 09:10:40.608235  317625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:40.695057  317625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:10:40.724636  317625 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274 for IP: 192.168.103.2
	I1029 09:10:40.724669  317625 certs.go:195] generating shared ca certs ...
	I1029 09:10:40.724690  317625 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:40.724875  317625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:10:40.724934  317625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:10:40.724949  317625 certs.go:257] generating profile certs ...
	I1029 09:10:40.725062  317625 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.key
	I1029 09:10:40.725143  317625 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key.81f03550
	I1029 09:10:40.725196  317625 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key
	I1029 09:10:40.725330  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:10:40.725366  317625 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:10:40.725381  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:10:40.725412  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:10:40.725440  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:10:40.725503  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:10:40.725564  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:40.726272  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:10:40.748226  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:10:40.769816  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:10:40.790962  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:10:40.818466  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 09:10:40.837938  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:10:40.856563  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:10:40.876160  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:10:40.897079  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:10:40.918537  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:10:40.939583  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:10:40.959802  317625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:10:40.973536  317625 ssh_runner.go:195] Run: openssl version
	I1029 09:10:40.979957  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:10:40.989882  317625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:10:40.994269  317625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:10:40.994333  317625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:10:41.030413  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:10:41.039525  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:10:41.049899  317625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:10:41.054206  317625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:10:41.054268  317625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:10:41.089851  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:10:41.098621  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:10:41.108714  317625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:41.113024  317625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:41.113098  317625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:41.149728  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
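	(The openssl/ln pairs above reproduce OpenSSL's hashed CA directory layout: a certificate in /etc/ssl/certs is looked up through a symlink named <subject-hash>.0, where the hash is what openssl x509 -hash prints — b5213941 for minikubeCA here. One round of the same dance:

	    # create the <hash>.0 lookup symlink OpenSSL expects
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	)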
	I1029 09:10:41.159497  317625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:10:41.163686  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:10:41.199647  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:10:41.237927  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:10:41.288183  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:10:41.342599  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:10:41.396902  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
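	(Each -checkend 86400 probe above asks whether a control-plane certificate is still valid 24 hours (86400 s) from now: openssl exits 0 if it is, non-zero if it will have expired by then, which is how minikube decides whether certs need regenerating. Standalone sketch:

	    # exit status says whether the cert survives another day
	    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt; then
	      echo "valid for at least 24h"
	    else
	      echo "expires within 24h (or already expired)"
	    fi
	)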
	I1029 09:10:41.439341  317625 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:41.439423  317625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:10:41.439476  317625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:10:41.474047  317625 cri.go:89] found id: "7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416"
	I1029 09:10:41.474074  317625 cri.go:89] found id: "f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e"
	I1029 09:10:41.474079  317625 cri.go:89] found id: "16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6"
	I1029 09:10:41.474084  317625 cri.go:89] found id: "bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21"
	I1029 09:10:41.474088  317625 cri.go:89] found id: ""
	I1029 09:10:41.474138  317625 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:10:41.489808  317625 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:41Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:41.489883  317625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:10:41.499065  317625 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:10:41.499088  317625 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:10:41.499129  317625 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:10:41.507512  317625 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:10:41.508826  317625 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-017274" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:41.509838  317625 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-3727/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-017274" cluster setting kubeconfig missing "default-k8s-diff-port-017274" context setting]
	I1029 09:10:41.511256  317625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:41.513619  317625 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:10:41.523103  317625 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1029 09:10:41.523143  317625 kubeadm.go:602] duration metric: took 24.047777ms to restartPrimaryControlPlane
	I1029 09:10:41.523154  317625 kubeadm.go:403] duration metric: took 83.819956ms to StartCluster
	I1029 09:10:41.523174  317625 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:41.523250  317625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:41.525671  317625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:41.526020  317625 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:41.526060  317625 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:10:41.526171  317625 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-017274"
	I1029 09:10:41.526193  317625 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-017274"
	W1029 09:10:41.526202  317625 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:10:41.526198  317625 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-017274"
	I1029 09:10:41.526208  317625 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-017274"
	I1029 09:10:41.526229  317625 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-017274"
	I1029 09:10:41.526234  317625 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-017274"
	I1029 09:10:41.526235  317625 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	W1029 09:10:41.526244  317625 addons.go:248] addon dashboard should already be in state true
	I1029 09:10:41.526253  317625 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:41.526287  317625 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	I1029 09:10:41.526567  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.526728  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.526745  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.528781  317625 out.go:179] * Verifying Kubernetes components...
	I1029 09:10:41.529928  317625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:41.554882  317625 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:10:41.555038  317625 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:10:41.556386  317625 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-017274"
	W1029 09:10:41.556418  317625 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:10:41.556448  317625 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	I1029 09:10:41.556527  317625 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:41.556545  317625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:10:41.556631  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:41.557030  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.558078  317625 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:10:41.559061  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:10:41.559084  317625 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:10:41.559152  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:41.595766  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:41.596336  317625 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:41.596359  317625 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:10:41.596412  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:41.598323  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:41.621894  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:41.677762  317625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:10:41.692918  317625 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-017274" to be "Ready" ...
	I1029 09:10:41.721869  317625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:41.722081  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:10:41.722108  317625 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:10:41.739007  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:10:41.739044  317625 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:10:41.739548  317625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:41.758644  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:10:41.759098  317625 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:10:41.779294  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:10:41.779319  317625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:10:41.798133  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:10:41.798159  317625 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:10:41.814316  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:10:41.814337  317625 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:10:41.829547  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:10:41.829593  317625 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:10:41.845418  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:10:41.845445  317625 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:10:41.858741  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:10:41.858771  317625 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:10:41.872545  317625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:10:43.050259  317625 node_ready.go:49] node "default-k8s-diff-port-017274" is "Ready"
	I1029 09:10:43.050303  317625 node_ready.go:38] duration metric: took 1.357336458s for node "default-k8s-diff-port-017274" to be "Ready" ...
	I1029 09:10:43.050321  317625 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:10:43.050385  317625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:10:43.591207  317625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.869296499s)
	I1029 09:10:43.591271  317625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.851643039s)
	I1029 09:10:43.591403  317625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.718815325s)
	I1029 09:10:43.591423  317625 api_server.go:72] duration metric: took 2.065367362s to wait for apiserver process to appear ...
	I1029 09:10:43.591436  317625 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:10:43.591459  317625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1029 09:10:43.593536  317625 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-017274 addons enable metrics-server
	
	I1029 09:10:43.596184  317625 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:43.596207  317625 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
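	(The 500 body above is the apiserver's verbose healthz report: every [+] check passed, and the two [-] poststarthooks — rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes — simply have not completed yet after the restart, so the aggregate check fails until they flip to [+]. The same view can be fetched directly; the default system:public-info-viewer binding allows unauthenticated reads of /healthz, so plain curl -k works there, though per-check subpaths may require credentials:

	    # aggregate health with per-check detail
	    curl -k 'https://192.168.103.2:8444/healthz?verbose'
	    # or probe just the failing hook (may need auth)
	    curl -k 'https://192.168.103.2:8444/healthz/poststarthook/rbac/bootstrap-roles'
	)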
	I1029 09:10:43.598674  317625 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1029 09:10:41.399735  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	I1029 09:10:42.398117  310203 pod_ready.go:94] pod "coredns-66bc5c9577-w9vf6" is "Ready"
	I1029 09:10:42.398147  310203 pod_ready.go:86] duration metric: took 35.507518283s for pod "coredns-66bc5c9577-w9vf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.401819  310203 pod_ready.go:83] waiting for pod "etcd-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.407605  310203 pod_ready.go:94] pod "etcd-embed-certs-834228" is "Ready"
	I1029 09:10:42.407633  310203 pod_ready.go:86] duration metric: took 5.78899ms for pod "etcd-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.410734  310203 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.416018  310203 pod_ready.go:94] pod "kube-apiserver-embed-certs-834228" is "Ready"
	I1029 09:10:42.416050  310203 pod_ready.go:86] duration metric: took 5.286046ms for pod "kube-apiserver-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.418748  310203 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.598311  310203 pod_ready.go:94] pod "kube-controller-manager-embed-certs-834228" is "Ready"
	I1029 09:10:42.598350  310203 pod_ready.go:86] duration metric: took 179.572408ms for pod "kube-controller-manager-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.796068  310203 pod_ready.go:83] waiting for pod "kube-proxy-bxthb" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.196699  310203 pod_ready.go:94] pod "kube-proxy-bxthb" is "Ready"
	I1029 09:10:43.196724  310203 pod_ready.go:86] duration metric: took 400.627762ms for pod "kube-proxy-bxthb" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.395333  310203 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.795204  310203 pod_ready.go:94] pod "kube-scheduler-embed-certs-834228" is "Ready"
	I1029 09:10:43.795239  310203 pod_ready.go:86] duration metric: took 399.876459ms for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.795255  310203 pod_ready.go:40] duration metric: took 36.908358431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:43.848632  310203 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:10:43.850954  310203 out.go:179] * Done! kubectl is now configured to use "embed-certs-834228" cluster and "default" namespace by default
	I1029 09:10:43.600518  317625 addons.go:515] duration metric: took 2.074462357s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1029 09:10:44.092439  317625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1029 09:10:44.096824  317625 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:44.096852  317625 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:42.903820  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	I1029 09:10:43.902642  310655 pod_ready.go:94] pod "coredns-66bc5c9577-bgslp" is "Ready"
	I1029 09:10:43.902677  310655 pod_ready.go:86] duration metric: took 36.507075473s for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.905740  310655 pod_ready.go:83] waiting for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.910817  310655 pod_ready.go:94] pod "etcd-no-preload-043790" is "Ready"
	I1029 09:10:43.910844  310655 pod_ready.go:86] duration metric: took 5.077264ms for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.913403  310655 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.918300  310655 pod_ready.go:94] pod "kube-apiserver-no-preload-043790" is "Ready"
	I1029 09:10:43.918330  310655 pod_ready.go:86] duration metric: took 4.900383ms for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.921011  310655 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.099529  310655 pod_ready.go:94] pod "kube-controller-manager-no-preload-043790" is "Ready"
	I1029 09:10:44.099560  310655 pod_ready.go:86] duration metric: took 178.519114ms for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.300282  310655 pod_ready.go:83] waiting for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.699624  310655 pod_ready.go:94] pod "kube-proxy-7dc8p" is "Ready"
	I1029 09:10:44.699659  310655 pod_ready.go:86] duration metric: took 399.349827ms for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.900331  310655 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:45.299663  310655 pod_ready.go:94] pod "kube-scheduler-no-preload-043790" is "Ready"
	I1029 09:10:45.299695  310655 pod_ready.go:86] duration metric: took 399.334148ms for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:45.299716  310655 pod_ready.go:40] duration metric: took 37.909127197s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:45.346588  310655 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:10:45.348402  310655 out.go:179] * Done! kubectl is now configured to use "no-preload-043790" cluster and "default" namespace by default
	I1029 09:10:44.591493  317625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1029 09:10:44.596021  317625 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1029 09:10:44.597305  317625 api_server.go:141] control plane version: v1.34.1
	I1029 09:10:44.597337  317625 api_server.go:131] duration metric: took 1.005889557s to wait for apiserver health ...
	I1029 09:10:44.597349  317625 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:10:44.601872  317625 system_pods.go:59] 8 kube-system pods found
	I1029 09:10:44.601911  317625 system_pods.go:61] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:44.601926  317625 system_pods.go:61] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:44.601942  317625 system_pods.go:61] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:10:44.601964  317625 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:44.601977  317625 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:44.602007  317625 system_pods.go:61] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:10:44.602020  317625 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:44.602051  317625 system_pods.go:61] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:44.602062  317625 system_pods.go:74] duration metric: took 4.703797ms to wait for pod list to return data ...
	I1029 09:10:44.602076  317625 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:10:44.604749  317625 default_sa.go:45] found service account: "default"
	I1029 09:10:44.604771  317625 default_sa.go:55] duration metric: took 2.68857ms for default service account to be created ...
	I1029 09:10:44.604780  317625 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:10:44.608136  317625 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:44.608169  317625 system_pods.go:89] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:44.608180  317625 system_pods.go:89] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:44.608193  317625 system_pods.go:89] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:10:44.608202  317625 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:44.608211  317625 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:44.608219  317625 system_pods.go:89] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:10:44.608235  317625 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:44.608248  317625 system_pods.go:89] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:44.608258  317625 system_pods.go:126] duration metric: took 3.471244ms to wait for k8s-apps to be running ...
	I1029 09:10:44.608273  317625 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:10:44.608323  317625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:44.622522  317625 system_svc.go:56] duration metric: took 14.241486ms WaitForService to wait for kubelet
	I1029 09:10:44.622547  317625 kubeadm.go:587] duration metric: took 3.096493749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:44.622564  317625 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:10:44.625775  317625 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:10:44.625803  317625 node_conditions.go:123] node cpu capacity is 8
	I1029 09:10:44.625819  317625 node_conditions.go:105] duration metric: took 3.250078ms to run NodePressure ...
	I1029 09:10:44.625834  317625 start.go:242] waiting for startup goroutines ...
	I1029 09:10:44.625843  317625 start.go:247] waiting for cluster config update ...
	I1029 09:10:44.625858  317625 start.go:256] writing updated cluster config ...
	I1029 09:10:44.626146  317625 ssh_runner.go:195] Run: rm -f paused
	I1029 09:10:44.630600  317625 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:44.634880  317625 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qtsxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:10:46.641345  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:10:48.641582  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 29 09:10:21 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:21.830663498Z" level=info msg="Started container" PID=1713 containerID=065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper id=22892e1f-05dc-4daf-88ac-74d2f2dc3205 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54c8188b59bdbae05495b7a0bb2d513278a4aa20bfff556b2ca00fe523741af3
	Oct 29 09:10:22 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:22.781259528Z" level=info msg="Removing container: 4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd" id=83fd0e83-5ef8-46e8-a1ac-16eba4dc63c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:22 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:22.791411222Z" level=info msg="Removed container 4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=83fd0e83-5ef8-46e8-a1ac-16eba4dc63c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.802769752Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=29856d36-5ecc-4986-8cfe-f6aa06661f5d name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.803751523Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4baefd6b-4128-4363-83cd-edea315daa10 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.804818511Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e219019e-f6bd-4970-9f2e-780c4f2fa05b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.804952403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.809610608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.80984125Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ede10dac12f515b582d344523b588e252c991ddcc84e4eb8cff6773dcaed1357/merged/etc/passwd: no such file or directory"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.809879951Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ede10dac12f515b582d344523b588e252c991ddcc84e4eb8cff6773dcaed1357/merged/etc/group: no such file or directory"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.810211757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.83451575Z" level=info msg="Created container 760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac: kube-system/storage-provisioner/storage-provisioner" id=e219019e-f6bd-4970-9f2e-780c4f2fa05b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.835261748Z" level=info msg="Starting container: 760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac" id=145cab5f-6a8b-4c93-b77f-ddccd64b7366 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.83721559Z" level=info msg="Started container" PID=1729 containerID=760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac description=kube-system/storage-provisioner/storage-provisioner id=145cab5f-6a8b-4c93-b77f-ddccd64b7366 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94507d456e4f70023e43b0d5e4fc64a2ff103f0bbac71d899e43d28220cdb158
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.685108332Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9bdb5ec4-915f-4fc2-a988-007e017c8573 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.686178152Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=527b285f-1fcb-4507-be26-9b5fd4049dc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.687202784Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=c991d8e6-10e8-4f5a-9c1f-3a294455f865 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.687358898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.693413387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.693940894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.724892815Z" level=info msg="Created container 30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=c991d8e6-10e8-4f5a-9c1f-3a294455f865 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.725650947Z" level=info msg="Starting container: 30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a" id=e1f80a64-6e3a-4c21-8eef-09648f99e81d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.727562301Z" level=info msg="Started container" PID=1766 containerID=30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper id=e1f80a64-6e3a-4c21-8eef-09648f99e81d name=/runtime.v1.RuntimeService/StartContainer sandboxID=54c8188b59bdbae05495b7a0bb2d513278a4aa20bfff556b2ca00fe523741af3
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.831192263Z" level=info msg="Removing container: 065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de" id=75e90672-0316-4dbf-94cb-f4c7d0c68f7f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.841081418Z" level=info msg="Removed container 065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=75e90672-0316-4dbf-94cb-f4c7d0c68f7f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	30070075bf0d8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   54c8188b59bdb       dashboard-metrics-scraper-5f989dc9cf-t2wb2       kubernetes-dashboard
	760a87238b2f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   94507d456e4f7       storage-provisioner                              kube-system
	bcd2e5c5941b1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   55b0c5ca1243c       kubernetes-dashboard-8694d4445c-zt5m2            kubernetes-dashboard
	c487e8fe869e3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   a5c9225453bff       coredns-5dd5756b68-v5mr5                         kube-system
	c2923b1598f3b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   82d81fc36a99f       busybox                                          default
	4d9dad20289cc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   a0eb79b83d195       kindnet-7qztm                                    kube-system
	031a423a3b88f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   94507d456e4f7       storage-provisioner                              kube-system
	737af4626b5ae       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   69b645a5faa77       kube-proxy-8kpqf                                 kube-system
	eb75fa40098e3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   c3fc3c7f4fba1       etcd-old-k8s-version-096492                      kube-system
	d92dd056da0fc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   4294f405906c3       kube-controller-manager-old-k8s-version-096492   kube-system
	f75d2e46364d0       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   8ac1d54dae710       kube-apiserver-old-k8s-version-096492            kube-system
	3c2ce552cdf8c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   2c6d3ef5183ae       kube-scheduler-old-k8s-version-096492            kube-system
	
	
	==> coredns [c487e8fe869e3ca2313a2d3948922a35774499c95c5df3089ea171e2f4b4e5e9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:52469 - 34395 "HINFO IN 3988397917584717053.5984887629911655176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.451363112s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-096492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-096492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=old-k8s-version-096492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_08_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:08:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-096492
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:10:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:09:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-096492
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9ea7cc04-2266-42af-af7f-14c5bd55b0ca
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-v5mr5                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-old-k8s-version-096492                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-7qztm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-096492             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-096492    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-8kpqf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-096492             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-t2wb2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-zt5m2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 107s                   kube-proxy       
	  Normal  Starting                 51s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m1s                   kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m1s                   kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m1s                   kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                   node-controller  Node old-k8s-version-096492 event: Registered Node old-k8s-version-096492 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-096492 status is now: NodeReady
	  Normal  Starting                 57s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                    node-controller  Node old-k8s-version-096492 event: Registered Node old-k8s-version-096492 in Controller
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [eb75fa40098e331528b7e94c2e2a71c766423c56a220a24eaaa69f66efdce4b6] <==
	{"level":"info","ts":"2025-10-29T09:09:57.235768Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:09:57.235781Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:09:57.23623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-29T09:09:57.236439Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-29T09:09:57.236678Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:09:57.236742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:09:57.23755Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-29T09:09:57.23766Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-29T09:09:57.237704Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-29T09:09:57.23786Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-29T09:09:57.23789Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-29T09:09:59.027587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-29T09:09:59.027634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:09:59.027696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-29T09:09:59.02771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.027723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.027733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.027746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.030073Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-096492 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:09:59.030081Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:09:59.030105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:09:59.030402Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:09:59.030431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-29T09:09:59.031157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-29T09:09:59.031503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 09:10:53 up 53 min,  0 user,  load average: 4.81, 4.11, 2.60
	Linux old-k8s-version-096492 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d9dad20289cc57f254242baa3e4cb4a7f518fd33fd2534285dec97a5d521b07] <==
	I1029 09:10:01.349356       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:01.349768       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:10:01.350025       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:01.350086       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:01.350140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:01.553684       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:01.553850       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:01.553866       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:01.554036       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:01.954246       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:01.954275       1 metrics.go:72] Registering metrics
	I1029 09:10:01.954343       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:11.555481       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:11.555544       1 main.go:301] handling current node
	I1029 09:10:21.554177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:21.554216       1 main.go:301] handling current node
	I1029 09:10:31.554541       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:31.554573       1 main.go:301] handling current node
	I1029 09:10:41.554142       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:41.554188       1 main.go:301] handling current node
	I1029 09:10:51.558301       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:51.558342       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f75d2e46364d0954bc8168a45bbf13f9854e2c28802b489937d6d807e197c25c] <==
	I1029 09:10:00.091321       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1029 09:10:00.115834       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1029 09:10:00.116058       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1029 09:10:00.117773       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1029 09:10:00.117268       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1029 09:10:00.117339       1 shared_informer.go:318] Caches are synced for configmaps
	I1029 09:10:00.119893       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1029 09:10:00.119899       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:10:00.120147       1 aggregator.go:166] initial CRD sync complete...
	I1029 09:10:00.120164       1 autoregister_controller.go:141] Starting autoregister controller
	I1029 09:10:00.120171       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:00.120179       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:10:00.123260       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:10:00.153491       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:10:01.022403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:01.084288       1 controller.go:624] quota admission added evaluator for: namespaces
	I1029 09:10:01.133006       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1029 09:10:01.158333       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:01.171727       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:01.184059       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1029 09:10:01.228793       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.157.13"}
	I1029 09:10:01.241876       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.49.76"}
	I1029 09:10:13.058436       1 controller.go:624] quota admission added evaluator for: endpoints
	I1029 09:10:13.099381       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1029 09:10:13.123521       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d92dd056da0fc02c82efa03b27cf291df638b891640d4514b8dea24f11e44842] <==
	I1029 09:10:13.148190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.581111ms"
	I1029 09:10:13.160193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.874212ms"
	I1029 09:10:13.160308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.795µs"
	I1029 09:10:13.178393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.49956ms"
	I1029 09:10:13.178621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.097µs"
	I1029 09:10:13.180664       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1029 09:10:13.180998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.338µs"
	I1029 09:10:13.195431       1 shared_informer.go:318] Caches are synced for cronjob
	I1029 09:10:13.218610       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:10:13.245768       1 shared_informer.go:318] Caches are synced for service account
	I1029 09:10:13.263496       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:10:13.266807       1 shared_informer.go:318] Caches are synced for job
	I1029 09:10:13.312179       1 shared_informer.go:318] Caches are synced for namespace
	I1029 09:10:13.621439       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:10:13.621469       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1029 09:10:13.645777       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:10:19.796362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.781253ms"
	I1029 09:10:19.797975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="106.452µs"
	I1029 09:10:21.787447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.624µs"
	I1029 09:10:22.792554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.09µs"
	I1029 09:10:23.794195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.703µs"
	I1029 09:10:36.199687       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.094396ms"
	I1029 09:10:36.199813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.471µs"
	I1029 09:10:39.842472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.144µs"
	I1029 09:10:43.461303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.751µs"
	
	
	==> kube-proxy [737af4626b5ae3892122d80b6d43829693d67087a126e1edab5cb80129fc0b89] <==
	I1029 09:10:01.126763       1 server_others.go:69] "Using iptables proxy"
	I1029 09:10:01.138282       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1029 09:10:01.168158       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:01.171074       1 server_others.go:152] "Using iptables Proxier"
	I1029 09:10:01.171112       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1029 09:10:01.171118       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1029 09:10:01.171154       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1029 09:10:01.171442       1 server.go:846] "Version info" version="v1.28.0"
	I1029 09:10:01.171464       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:01.172405       1 config.go:188] "Starting service config controller"
	I1029 09:10:01.172447       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1029 09:10:01.172446       1 config.go:97] "Starting endpoint slice config controller"
	I1029 09:10:01.172543       1 config.go:315] "Starting node config controller"
	I1029 09:10:01.172583       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1029 09:10:01.172600       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1029 09:10:01.273260       1 shared_informer.go:318] Caches are synced for service config
	I1029 09:10:01.273261       1 shared_informer.go:318] Caches are synced for node config
	I1029 09:10:01.274535       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3c2ce552cdf8c320285c2bb9f072826ac4a862ddc09798713d1491913854ccfa] <==
	I1029 09:09:57.584628       1 serving.go:348] Generated self-signed cert in-memory
	I1029 09:10:00.105286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1029 09:10:00.105320       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:00.109956       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:00.109987       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:10:00.109960       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1029 09:10:00.110075       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1029 09:10:00.110071       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:00.110129       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1029 09:10:00.111215       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1029 09:10:00.111307       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1029 09:10:00.211147       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1029 09:10:00.211155       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:10:00.211169       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.146553     716 topology_manager.go:215] "Topology Admit Handler" podUID="f46f157e-bc03-44ee-8351-6e8f3b4da48e" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-zt5m2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323248     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9lmn\" (UniqueName: \"kubernetes.io/projected/f4193758-8c8c-48ab-8c47-8fab422e6ad2-kube-api-access-f9lmn\") pod \"dashboard-metrics-scraper-5f989dc9cf-t2wb2\" (UID: \"f4193758-8c8c-48ab-8c47-8fab422e6ad2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323331     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f46f157e-bc03-44ee-8351-6e8f3b4da48e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-zt5m2\" (UID: \"f46f157e-bc03-44ee-8351-6e8f3b4da48e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zt5m2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323478     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4193758-8c8c-48ab-8c47-8fab422e6ad2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-t2wb2\" (UID: \"f4193758-8c8c-48ab-8c47-8fab422e6ad2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323574     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwcf\" (UniqueName: \"kubernetes.io/projected/f46f157e-bc03-44ee-8351-6e8f3b4da48e-kube-api-access-fjwcf\") pod \"kubernetes-dashboard-8694d4445c-zt5m2\" (UID: \"f46f157e-bc03-44ee-8351-6e8f3b4da48e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zt5m2"
	Oct 29 09:10:21 old-k8s-version-096492 kubelet[716]: I1029 09:10:21.775050     716 scope.go:117] "RemoveContainer" containerID="4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd"
	Oct 29 09:10:21 old-k8s-version-096492 kubelet[716]: I1029 09:10:21.787438     716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zt5m2" podStartSLOduration=2.95518771 podCreationTimestamp="2025-10-29 09:10:13 +0000 UTC" firstStartedPulling="2025-10-29 09:10:13.574721443 +0000 UTC m=+16.999034267" lastFinishedPulling="2025-10-29 09:10:19.406901181 +0000 UTC m=+22.831214009" observedRunningTime="2025-10-29 09:10:19.788690098 +0000 UTC m=+23.213002935" watchObservedRunningTime="2025-10-29 09:10:21.787367452 +0000 UTC m=+25.211680298"
	Oct 29 09:10:22 old-k8s-version-096492 kubelet[716]: I1029 09:10:22.779708     716 scope.go:117] "RemoveContainer" containerID="4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd"
	Oct 29 09:10:22 old-k8s-version-096492 kubelet[716]: I1029 09:10:22.779886     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:22 old-k8s-version-096492 kubelet[716]: E1029 09:10:22.780348     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:23 old-k8s-version-096492 kubelet[716]: I1029 09:10:23.784230     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:23 old-k8s-version-096492 kubelet[716]: E1029 09:10:23.784575     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:24 old-k8s-version-096492 kubelet[716]: I1029 09:10:24.786026     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:24 old-k8s-version-096492 kubelet[716]: E1029 09:10:24.786315     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:31 old-k8s-version-096492 kubelet[716]: I1029 09:10:31.802215     716 scope.go:117] "RemoveContainer" containerID="031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: I1029 09:10:39.684399     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: I1029 09:10:39.828584     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: I1029 09:10:39.828896     716 scope.go:117] "RemoveContainer" containerID="30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: E1029 09:10:39.829254     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:43 old-k8s-version-096492 kubelet[716]: I1029 09:10:43.448598     716 scope.go:117] "RemoveContainer" containerID="30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	Oct 29 09:10:43 old-k8s-version-096492 kubelet[716]: E1029 09:10:43.449018     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: kubelet.service: Consumed 1.627s CPU time.
	
	
	==> kubernetes-dashboard [bcd2e5c5941b1ea800a416d3424313ba461f9e10fec33ea99dbad02c9f819245] <==
	2025/10/29 09:10:19 Starting overwatch
	2025/10/29 09:10:19 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:19 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:19 Using secret token for csrf signing
	2025/10/29 09:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:19 Successful initial request to the apiserver, version: v1.28.0
	2025/10/29 09:10:19 Generating JWE encryption key
	2025/10/29 09:10:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:19 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:19 Creating in-cluster Sidecar client
	2025/10/29 09:10:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:19 Serving insecurely on HTTP port: 9090
	2025/10/29 09:10:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968] <==
	I1029 09:10:01.085119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:10:31.090332       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac] <==
	I1029 09:10:31.849232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:10:31.857442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:10:31.857490       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1029 09:10:49.257012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:10:49.257136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e7c8c98-8fd4-43b1-9dc7-61c97a398c0b", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-096492_fbf225a0-d214-4fb3-a439-f1ccb5621b69 became leader
	I1029 09:10:49.257219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-096492_fbf225a0-d214-4fb3-a439-f1ccb5621b69!
	I1029 09:10:49.357980       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-096492_fbf225a0-d214-4fb3-a439-f1ccb5621b69!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096492 -n old-k8s-version-096492
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096492 -n old-k8s-version-096492: exit status 2 (346.813515ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-096492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-096492
helpers_test.go:243: (dbg) docker inspect old-k8s-version-096492:

-- stdout --
	[
	    {
	        "Id": "949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487",
	        "Created": "2025-10-29T09:08:32.774738315Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308800,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:09:49.690951743Z",
	            "FinishedAt": "2025-10-29T09:09:48.794323899Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/hostname",
	        "HostsPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/hosts",
	        "LogPath": "/var/lib/docker/containers/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487/949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487-json.log",
	        "Name": "/old-k8s-version-096492",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-096492:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-096492",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "949e662a472437d7d71b8b7272e69d73a275764f2d899600e38cf3e0e92fe487",
	                "LowerDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3dd617d7720a614d5c6d58f2524fa03b6bedc6f5d6a5c3f937ac49410148bfab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-096492",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-096492/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-096492",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-096492",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-096492",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c31c5e5371a49f25886b0c91a045e5f9ce816f17397531870382138ae048edb7",
	            "SandboxKey": "/var/run/docker/netns/c31c5e5371a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-096492": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:e9:9a:86:25:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1d4705eea8799ddd63b1a9cbeb0ede40231eb0a1209d909b2eae8f7a7d7c543",
	                    "EndpointID": "3ea85180d968c45939a341f777a1713cd2efbb13655e27f51fef10bb487f0364",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-096492",
	                        "949e662a4724"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
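For anyone digging into failures like this one: the "NetworkSettings.Ports" map in the inspect dump above is exactly what the harness queries to reach the node over SSH. Below is a minimal Go sketch (not part of the test suite; the container name "old-k8s-version-096492" and the "22/tcp" key are taken from the dump above) that reads the mapped host port back using the same `docker container inspect -f` template that appears later in this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort shells out to `docker container inspect -f` with the same Go
	// template the harness uses below to locate the published SSH port.
	func hostPort(container, containerPort string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Container name and port key are taken from the inspect dump above.
		port, err := hostPort("old-k8s-version-096492", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // "33108" in the dump above
	}

Run against this profile while the container is up, it would print 127.0.0.1:33108, matching the 22/tcp entry in the dump.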
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492: exit status 2 (344.605999ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
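The "(may be ok)" note reflects how the harness treats `minikube status`: the command can exit non-zero (status 2 here) while stdout still carries a usable host state ("Running" above). A hedged sketch of that tolerance, assuming the same binary path and profile as the command above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState runs `minikube status --format={{.Host}}` and, mirroring
	// helpers_test above, treats exit status 2 as non-fatal as long as a
	// readable state was printed.
	func hostState(minikube, profile string) (string, error) {
		out, err := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 && state != "" {
			return state, nil // "Running" above, despite the non-zero exit
		}
		return state, err
	}

	func main() {
		// Binary path and profile match the helpers_test command above.
		state, err := hostState("out/minikube-linux-amd64", "old-k8s-version-096492")
		fmt.Println("host:", state, "err:", err)
	}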
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-096492 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-096492 logs -n 25: (1.133273797s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-240549 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ ssh     │ -p bridge-240549 sudo crio config                                                                                                                                                                                                             │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p bridge-240549                                                                                                                                                                                                                              │ bridge-240549                │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ delete  │ -p disable-driver-mounts-318335                                                                                                                                                                                                               │ disable-driver-mounts-318335 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:34.314162  317625 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:34.314402  317625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:34.314410  317625 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:34.314414  317625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:34.314634  317625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:34.315081  317625 out.go:368] Setting JSON to false
	I1029 09:10:34.316495  317625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3182,"bootTime":1761725852,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:34.316584  317625 start.go:143] virtualization: kvm guest
	I1029 09:10:34.318767  317625 out.go:179] * [default-k8s-diff-port-017274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:34.320082  317625 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:34.320074  317625 notify.go:221] Checking for updates...
	I1029 09:10:34.322494  317625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:34.324124  317625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:34.325558  317625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:34.326930  317625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:34.328350  317625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:34.330256  317625 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:34.330944  317625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:34.357479  317625 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:34.357608  317625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:34.422564  317625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:10:34.411959992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:34.422671  317625 docker.go:319] overlay module found
	I1029 09:10:34.425110  317625 out.go:179] * Using the docker driver based on existing profile
	W1029 09:10:29.799925  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	W1029 09:10:31.800684  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	W1029 09:10:33.800746  308587 pod_ready.go:104] pod "coredns-5dd5756b68-v5mr5" is not "Ready", error: <nil>
	I1029 09:10:34.426363  317625 start.go:309] selected driver: docker
	I1029 09:10:34.426404  317625 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:34.426495  317625 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:34.427114  317625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:34.488245  317625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 09:10:34.476753433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:34.488557  317625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:34.488588  317625 cni.go:84] Creating CNI manager for ""
	I1029 09:10:34.488640  317625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:34.488711  317625 start.go:353] cluster config:
	{Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:34.490469  317625 out.go:179] * Starting "default-k8s-diff-port-017274" primary control-plane node in "default-k8s-diff-port-017274" cluster
	I1029 09:10:34.491673  317625 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:34.492880  317625 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:34.494011  317625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:34.494059  317625 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:34.494079  317625 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:34.494112  317625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:34.494176  317625 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:34.494189  317625 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:10:34.494299  317625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:10:34.516205  317625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:34.516231  317625 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:34.516249  317625 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:10:34.516279  317625 start.go:360] acquireMachinesLock for default-k8s-diff-port-017274: {Name:mkec68307c2ffe0cd4f9e8fcf3c8e2dc4c6d4bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:34.516374  317625 start.go:364] duration metric: took 69.184µs to acquireMachinesLock for "default-k8s-diff-port-017274"
	I1029 09:10:34.516399  317625 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:10:34.516408  317625 fix.go:54] fixHost starting: 
	I1029 09:10:34.516710  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:34.536101  317625 fix.go:112] recreateIfNeeded on default-k8s-diff-port-017274: state=Stopped err=<nil>
	W1029 09:10:34.536145  317625 fix.go:138] unexpected machine state, will restart: <nil>
	W1029 09:10:32.897121  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:34.897709  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:33.402105  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	W1029 09:10:35.901778  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	I1029 09:10:36.302413  308587 pod_ready.go:94] pod "coredns-5dd5756b68-v5mr5" is "Ready"
	I1029 09:10:36.302442  308587 pod_ready.go:86] duration metric: took 34.508479031s for pod "coredns-5dd5756b68-v5mr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.305786  308587 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.310818  308587 pod_ready.go:94] pod "etcd-old-k8s-version-096492" is "Ready"
	I1029 09:10:36.310845  308587 pod_ready.go:86] duration metric: took 5.032342ms for pod "etcd-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.313610  308587 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.318204  308587 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-096492" is "Ready"
	I1029 09:10:36.318230  308587 pod_ready.go:86] duration metric: took 4.597684ms for pod "kube-apiserver-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.321349  308587 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.497957  308587 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-096492" is "Ready"
	I1029 09:10:36.497981  308587 pod_ready.go:86] duration metric: took 176.608582ms for pod "kube-controller-manager-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:36.698657  308587 pod_ready.go:83] waiting for pod "kube-proxy-8kpqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.097318  308587 pod_ready.go:94] pod "kube-proxy-8kpqf" is "Ready"
	I1029 09:10:37.097343  308587 pod_ready.go:86] duration metric: took 398.661369ms for pod "kube-proxy-8kpqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.298274  308587 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.697624  308587 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-096492" is "Ready"
	I1029 09:10:37.697650  308587 pod_ready.go:86] duration metric: took 399.348612ms for pod "kube-scheduler-old-k8s-version-096492" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:37.697661  308587 pod_ready.go:40] duration metric: took 35.908433904s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:37.743274  308587 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1029 09:10:37.744939  308587 out.go:203] 
	W1029 09:10:37.746335  308587 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1029 09:10:37.747520  308587 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1029 09:10:37.748896  308587 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-096492" cluster and "default" namespace by default
	I1029 09:10:34.538054  317625 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-017274" ...
	I1029 09:10:34.538151  317625 cli_runner.go:164] Run: docker start default-k8s-diff-port-017274
	I1029 09:10:34.805129  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:34.824411  317625 kic.go:430] container "default-k8s-diff-port-017274" state is running.
	I1029 09:10:34.824754  317625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:10:34.844676  317625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/config.json ...
	I1029 09:10:34.845019  317625 machine.go:94] provisionDockerMachine start ...
	I1029 09:10:34.845114  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:34.865023  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:34.865270  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:34.865283  317625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:10:34.865957  317625 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45588->127.0.0.1:33123: read: connection reset by peer
	I1029 09:10:38.011366  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-017274
	
	I1029 09:10:38.011394  317625 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-017274"
	I1029 09:10:38.011458  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.033345  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:38.033651  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:38.033690  317625 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-017274 && echo "default-k8s-diff-port-017274" | sudo tee /etc/hostname
	I1029 09:10:38.194487  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-017274
	
	I1029 09:10:38.194582  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.214354  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:38.214605  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:38.214636  317625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-017274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-017274/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-017274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:10:38.362074  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:10:38.362104  317625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:10:38.362154  317625 ubuntu.go:190] setting up certificates
	I1029 09:10:38.362168  317625 provision.go:84] configureAuth start
	I1029 09:10:38.362240  317625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:10:38.380518  317625 provision.go:143] copyHostCerts
	I1029 09:10:38.380587  317625 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:10:38.380602  317625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:10:38.380681  317625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:10:38.380829  317625 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:10:38.380845  317625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:10:38.380891  317625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:10:38.380976  317625 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:10:38.380987  317625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:10:38.381054  317625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:10:38.381120  317625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-017274 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-017274 localhost minikube]
	I1029 09:10:38.466350  317625 provision.go:177] copyRemoteCerts
	I1029 09:10:38.466416  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:10:38.466452  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.487559  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:38.590735  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:10:38.609285  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1029 09:10:38.628229  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:10:38.648008  317625 provision.go:87] duration metric: took 285.801018ms to configureAuth
	I1029 09:10:38.648039  317625 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:10:38.648240  317625 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:38.648357  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.667541  317625 main.go:143] libmachine: Using SSH client type: native
	I1029 09:10:38.667753  317625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1029 09:10:38.667772  317625 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:10:38.975442  317625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:10:38.975473  317625 machine.go:97] duration metric: took 4.130433747s to provisionDockerMachine
	I1029 09:10:38.975486  317625 start.go:293] postStartSetup for "default-k8s-diff-port-017274" (driver="docker")
	I1029 09:10:38.975500  317625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:10:38.975556  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:10:38.975615  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:38.996683  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.098686  317625 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:10:39.102376  317625 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:10:39.102403  317625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:10:39.102416  317625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:10:39.102475  317625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:10:39.102576  317625 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:10:39.102699  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:10:39.111386  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:39.131438  317625 start.go:296] duration metric: took 155.934122ms for postStartSetup
	I1029 09:10:39.131530  317625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:10:39.131572  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:39.150633  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.249529  317625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:10:39.254981  317625 fix.go:56] duration metric: took 4.738564782s for fixHost
	I1029 09:10:39.255038  317625 start.go:83] releasing machines lock for "default-k8s-diff-port-017274", held for 4.738649406s
	I1029 09:10:39.255109  317625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-017274
	I1029 09:10:39.275262  317625 ssh_runner.go:195] Run: cat /version.json
	I1029 09:10:39.275282  317625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:10:39.275327  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:39.275352  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:39.298403  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.298411  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:39.456100  317625 ssh_runner.go:195] Run: systemctl --version
	I1029 09:10:39.463044  317625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:10:39.502396  317625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:10:39.507564  317625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:10:39.507644  317625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:10:39.516520  317625 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:10:39.516543  317625 start.go:496] detecting cgroup driver to use...
	I1029 09:10:39.516577  317625 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:10:39.516626  317625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:10:39.533429  317625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:10:39.547753  317625 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:10:39.547827  317625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:10:39.564972  317625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:10:39.579374  317625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:10:39.667625  317625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:10:39.763084  317625 docker.go:234] disabling docker service ...
	I1029 09:10:39.763154  317625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:10:39.779119  317625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:10:39.792652  317625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:10:39.880461  317625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:10:39.971132  317625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:10:39.984735  317625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:10:40.000092  317625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:10:40.000147  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.010039  317625 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:10:40.010109  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.020031  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.029894  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.040068  317625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:10:40.049111  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.058809  317625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.067789  317625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:10:40.077061  317625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:10:40.084847  317625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:10:40.092966  317625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:40.178284  317625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:10:40.289591  317625 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:10:40.289652  317625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:10:40.293745  317625 start.go:564] Will wait 60s for crictl version
	I1029 09:10:40.293800  317625 ssh_runner.go:195] Run: which crictl
	I1029 09:10:40.297520  317625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:10:40.322143  317625 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:10:40.322215  317625 ssh_runner.go:195] Run: crio --version
	I1029 09:10:40.353192  317625 ssh_runner.go:195] Run: crio --version
	I1029 09:10:40.384606  317625 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1029 09:10:36.900514  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:39.397190  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	W1029 09:10:38.404046  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	W1029 09:10:40.901823  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	I1029 09:10:40.385984  317625 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-017274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:10:40.406119  317625 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1029 09:10:40.410546  317625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:10:40.421854  317625 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:10:40.422057  317625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:40.422131  317625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:10:40.455472  317625 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:10:40.455501  317625 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:10:40.455559  317625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:10:40.483067  317625 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:10:40.483097  317625 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:10:40.483107  317625 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1029 09:10:40.483256  317625 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-017274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:10:40.483344  317625 ssh_runner.go:195] Run: crio config
	I1029 09:10:40.533203  317625 cni.go:84] Creating CNI manager for ""
	I1029 09:10:40.533227  317625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:40.533242  317625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:10:40.533263  317625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-017274 NodeName:default-k8s-diff-port-017274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:10:40.533415  317625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-017274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
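[Editor's note] Before this rendered config is shipped to the node, it can be sanity-checked offline. Recent kubeadm releases include a `kubeadm config validate` subcommand for exactly this; a sketch against the path used a few lines below (availability depends on the kubeadm version installed on the node):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new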
	I1029 09:10:40.533488  317625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:10:40.542333  317625 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:10:40.542399  317625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:10:40.551010  317625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1029 09:10:40.565346  317625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:10:40.579148  317625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1029 09:10:40.592941  317625 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:10:40.597152  317625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:10:40.608235  317625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:40.695057  317625 ssh_runner.go:195] Run: sudo systemctl start kubelet
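[Editor's note] The sequence above is the standard systemd drop-in dance: write kubelet.service plus its 10-kubeadm.conf override, `daemon-reload` so systemd re-reads both, then start the unit. To confirm which ExecStart actually won after the override (both are stock systemctl commands):

    systemctl cat kubelet                        # prints the unit and every drop-in, in order
    systemctl show kubelet -p ExecStart --no-pager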
	I1029 09:10:40.724636  317625 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274 for IP: 192.168.103.2
	I1029 09:10:40.724669  317625 certs.go:195] generating shared ca certs ...
	I1029 09:10:40.724690  317625 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:40.724875  317625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:10:40.724934  317625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:10:40.724949  317625 certs.go:257] generating profile certs ...
	I1029 09:10:40.725062  317625 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/client.key
	I1029 09:10:40.725143  317625 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key.81f03550
	I1029 09:10:40.725196  317625 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key
	I1029 09:10:40.725330  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:10:40.725366  317625 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:10:40.725381  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:10:40.725412  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:10:40.725440  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:10:40.725503  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:10:40.725564  317625 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:10:40.726272  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:10:40.748226  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:10:40.769816  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:10:40.790962  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:10:40.818466  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1029 09:10:40.837938  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:10:40.856563  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:10:40.876160  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/default-k8s-diff-port-017274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:10:40.897079  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:10:40.918537  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:10:40.939583  317625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:10:40.959802  317625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:10:40.973536  317625 ssh_runner.go:195] Run: openssl version
	I1029 09:10:40.979957  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:10:40.989882  317625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:10:40.994269  317625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:10:40.994333  317625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:10:41.030413  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:10:41.039525  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:10:41.049899  317625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:10:41.054206  317625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:10:41.054268  317625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:10:41.089851  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:10:41.098621  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:10:41.108714  317625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:41.113024  317625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:41.113098  317625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:10:41.149728  317625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
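[Editor's note] The `openssl x509 -hash` calls explain the odd symlink names: OpenSSL locates trust anchors by subject hash, looking up <hash>.0 under /etc/ssl/certs, so each cert gets a link named after its hash (51391683.0, 3ec20f2e.0, b5213941.0 above). A hash/link pair can be verified by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"    # should resolve back to minikubeCA.pem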
	I1029 09:10:41.159497  317625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:10:41.163686  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:10:41.199647  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:10:41.237927  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:10:41.288183  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:10:41.342599  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:10:41.396902  317625 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
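[Editor's note] Each `-checkend 86400` probe exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours), so a non-zero exit flags an expiring cert before the control plane is restarted. Spelled out for one of the files checked above:

    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "valid for at least 24h"
    else
      echo "expires within 24h; needs regeneration"
    fi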
	I1029 09:10:41.439341  317625 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-017274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-017274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:41.439423  317625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:10:41.439476  317625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:10:41.474047  317625 cri.go:89] found id: "7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416"
	I1029 09:10:41.474074  317625 cri.go:89] found id: "f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e"
	I1029 09:10:41.474079  317625 cri.go:89] found id: "16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6"
	I1029 09:10:41.474084  317625 cri.go:89] found id: "bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21"
	I1029 09:10:41.474088  317625 cri.go:89] found id: ""
	I1029 09:10:41.474138  317625 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:10:41.489808  317625 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:41Z" level=error msg="open /run/runc: no such file or directory"
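[Editor's note] The runc failure here is benign: /run/runc is runc's state directory and only exists once runc has managed at least one container, so "no such file or directory" just means there is nothing paused to enumerate, and the flow falls through to the restart path below. A guarded version of the same probe (a sketch, not what minikube itself runs):

    sudo test -d /run/runc \
      && sudo runc list -f json \
      || echo "no runc state dir; nothing to unpause"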
	I1029 09:10:41.489883  317625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:10:41.499065  317625 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:10:41.499088  317625 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:10:41.499129  317625 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:10:41.507512  317625 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:10:41.508826  317625 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-017274" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:41.509838  317625 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-3727/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-017274" cluster setting kubeconfig missing "default-k8s-diff-port-017274" context setting]
	I1029 09:10:41.511256  317625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:41.513619  317625 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:10:41.523103  317625 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1029 09:10:41.523143  317625 kubeadm.go:602] duration metric: took 24.047777ms to restartPrimaryControlPlane
	I1029 09:10:41.523154  317625 kubeadm.go:403] duration metric: took 83.819956ms to StartCluster
	I1029 09:10:41.523174  317625 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:41.523250  317625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:41.525671  317625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:41.526020  317625 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:41.526060  317625 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:10:41.526171  317625 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-017274"
	I1029 09:10:41.526193  317625 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-017274"
	W1029 09:10:41.526202  317625 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:10:41.526198  317625 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-017274"
	I1029 09:10:41.526208  317625 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-017274"
	I1029 09:10:41.526229  317625 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-017274"
	I1029 09:10:41.526234  317625 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-017274"
	I1029 09:10:41.526235  317625 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	W1029 09:10:41.526244  317625 addons.go:248] addon dashboard should already be in state true
	I1029 09:10:41.526253  317625 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:41.526287  317625 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	I1029 09:10:41.526567  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.526728  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.526745  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.528781  317625 out.go:179] * Verifying Kubernetes components...
	I1029 09:10:41.529928  317625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:10:41.554882  317625 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:10:41.555038  317625 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:10:41.556386  317625 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-017274"
	W1029 09:10:41.556418  317625 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:10:41.556448  317625 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	I1029 09:10:41.556527  317625 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:41.556545  317625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:10:41.556631  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:41.557030  317625 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:10:41.558078  317625 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:10:41.559061  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:10:41.559084  317625 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:10:41.559152  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:41.595766  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:41.596336  317625 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:41.596359  317625 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:10:41.596412  317625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:10:41.598323  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:41.621894  317625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:10:41.677762  317625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:10:41.692918  317625 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-017274" to be "Ready" ...
	I1029 09:10:41.721869  317625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:10:41.722081  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:10:41.722108  317625 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:10:41.739007  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:10:41.739044  317625 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:10:41.739548  317625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:10:41.758644  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:10:41.759098  317625 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:10:41.779294  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:10:41.779319  317625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:10:41.798133  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:10:41.798159  317625 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:10:41.814316  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:10:41.814337  317625 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:10:41.829547  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:10:41.829593  317625 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:10:41.845418  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:10:41.845445  317625 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:10:41.858741  317625 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:10:41.858771  317625 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:10:41.872545  317625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
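[Editor's note] All ten dashboard manifests go through one kubectl invocation. Since they all sit under /etc/kubernetes/addons, a directory-wide apply is an equivalent shorthand (a sketch; it would also re-apply the storage-provisioner and storageclass files already staged in that directory):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/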
	I1029 09:10:43.050259  317625 node_ready.go:49] node "default-k8s-diff-port-017274" is "Ready"
	I1029 09:10:43.050303  317625 node_ready.go:38] duration metric: took 1.357336458s for node "default-k8s-diff-port-017274" to be "Ready" ...
	I1029 09:10:43.050321  317625 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:10:43.050385  317625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:10:43.591207  317625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.869296499s)
	I1029 09:10:43.591271  317625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.851643039s)
	I1029 09:10:43.591403  317625 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.718815325s)
	I1029 09:10:43.591423  317625 api_server.go:72] duration metric: took 2.065367362s to wait for apiserver process to appear ...
	I1029 09:10:43.591436  317625 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:10:43.591459  317625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1029 09:10:43.593536  317625 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-017274 addons enable metrics-server
	
	I1029 09:10:43.596184  317625 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:43.596207  317625 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:10:43.598674  317625 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1029 09:10:41.399735  310203 pod_ready.go:104] pod "coredns-66bc5c9577-w9vf6" is not "Ready", error: <nil>
	I1029 09:10:42.398117  310203 pod_ready.go:94] pod "coredns-66bc5c9577-w9vf6" is "Ready"
	I1029 09:10:42.398147  310203 pod_ready.go:86] duration metric: took 35.507518283s for pod "coredns-66bc5c9577-w9vf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.401819  310203 pod_ready.go:83] waiting for pod "etcd-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.407605  310203 pod_ready.go:94] pod "etcd-embed-certs-834228" is "Ready"
	I1029 09:10:42.407633  310203 pod_ready.go:86] duration metric: took 5.78899ms for pod "etcd-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.410734  310203 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.416018  310203 pod_ready.go:94] pod "kube-apiserver-embed-certs-834228" is "Ready"
	I1029 09:10:42.416050  310203 pod_ready.go:86] duration metric: took 5.286046ms for pod "kube-apiserver-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.418748  310203 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.598311  310203 pod_ready.go:94] pod "kube-controller-manager-embed-certs-834228" is "Ready"
	I1029 09:10:42.598350  310203 pod_ready.go:86] duration metric: took 179.572408ms for pod "kube-controller-manager-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:42.796068  310203 pod_ready.go:83] waiting for pod "kube-proxy-bxthb" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.196699  310203 pod_ready.go:94] pod "kube-proxy-bxthb" is "Ready"
	I1029 09:10:43.196724  310203 pod_ready.go:86] duration metric: took 400.627762ms for pod "kube-proxy-bxthb" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.395333  310203 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.795204  310203 pod_ready.go:94] pod "kube-scheduler-embed-certs-834228" is "Ready"
	I1029 09:10:43.795239  310203 pod_ready.go:86] duration metric: took 399.876459ms for pod "kube-scheduler-embed-certs-834228" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.795255  310203 pod_ready.go:40] duration metric: took 36.908358431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:43.848632  310203 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:10:43.850954  310203 out.go:179] * Done! kubectl is now configured to use "embed-certs-834228" cluster and "default" namespace by default
	I1029 09:10:43.600518  317625 addons.go:515] duration metric: took 2.074462357s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1029 09:10:44.092439  317625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1029 09:10:44.096824  317625 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:44.096852  317625 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:10:42.903820  310655 pod_ready.go:104] pod "coredns-66bc5c9577-bgslp" is not "Ready", error: <nil>
	I1029 09:10:43.902642  310655 pod_ready.go:94] pod "coredns-66bc5c9577-bgslp" is "Ready"
	I1029 09:10:43.902677  310655 pod_ready.go:86] duration metric: took 36.507075473s for pod "coredns-66bc5c9577-bgslp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.905740  310655 pod_ready.go:83] waiting for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.910817  310655 pod_ready.go:94] pod "etcd-no-preload-043790" is "Ready"
	I1029 09:10:43.910844  310655 pod_ready.go:86] duration metric: took 5.077264ms for pod "etcd-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.913403  310655 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.918300  310655 pod_ready.go:94] pod "kube-apiserver-no-preload-043790" is "Ready"
	I1029 09:10:43.918330  310655 pod_ready.go:86] duration metric: took 4.900383ms for pod "kube-apiserver-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:43.921011  310655 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.099529  310655 pod_ready.go:94] pod "kube-controller-manager-no-preload-043790" is "Ready"
	I1029 09:10:44.099560  310655 pod_ready.go:86] duration metric: took 178.519114ms for pod "kube-controller-manager-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.300282  310655 pod_ready.go:83] waiting for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.699624  310655 pod_ready.go:94] pod "kube-proxy-7dc8p" is "Ready"
	I1029 09:10:44.699659  310655 pod_ready.go:86] duration metric: took 399.349827ms for pod "kube-proxy-7dc8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:44.900331  310655 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:45.299663  310655 pod_ready.go:94] pod "kube-scheduler-no-preload-043790" is "Ready"
	I1029 09:10:45.299695  310655 pod_ready.go:86] duration metric: took 399.334148ms for pod "kube-scheduler-no-preload-043790" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:10:45.299716  310655 pod_ready.go:40] duration metric: took 37.909127197s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:45.346588  310655 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:10:45.348402  310655 out.go:179] * Done! kubectl is now configured to use "no-preload-043790" cluster and "default" namespace by default
	I1029 09:10:44.591493  317625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1029 09:10:44.596021  317625 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
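[Editor's note] The readiness loop simply re-polls /healthz until the two [-] poststarthook entries clear, which is what turned the 500s above into this 200. The same probe is reproducible with curl (port from this run; -k skips verifying against the cluster CA, acceptable for a local check):

    curl -sk https://192.168.103.2:8444/healthz              # prints "ok" once healthy
    curl -sk 'https://192.168.103.2:8444/healthz?verbose'    # per-check breakdown while settling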
	I1029 09:10:44.597305  317625 api_server.go:141] control plane version: v1.34.1
	I1029 09:10:44.597337  317625 api_server.go:131] duration metric: took 1.005889557s to wait for apiserver health ...
	I1029 09:10:44.597349  317625 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:10:44.601872  317625 system_pods.go:59] 8 kube-system pods found
	I1029 09:10:44.601911  317625 system_pods.go:61] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:44.601926  317625 system_pods.go:61] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:44.601942  317625 system_pods.go:61] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:10:44.601964  317625 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:44.601977  317625 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:44.602007  317625 system_pods.go:61] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:10:44.602020  317625 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:44.602051  317625 system_pods.go:61] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:44.602062  317625 system_pods.go:74] duration metric: took 4.703797ms to wait for pod list to return data ...
	I1029 09:10:44.602076  317625 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:10:44.604749  317625 default_sa.go:45] found service account: "default"
	I1029 09:10:44.604771  317625 default_sa.go:55] duration metric: took 2.68857ms for default service account to be created ...
	I1029 09:10:44.604780  317625 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:10:44.608136  317625 system_pods.go:86] 8 kube-system pods found
	I1029 09:10:44.608169  317625 system_pods.go:89] "coredns-66bc5c9577-qtsxl" [c671126a-10b8-46ff-b868-24fb3c0c8271] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:10:44.608180  317625 system_pods.go:89] "etcd-default-k8s-diff-port-017274" [a2fbc310-b3d1-401a-970e-c4a22db898e5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:10:44.608193  317625 system_pods.go:89] "kindnet-tdtxm" [36fa8db0-2ffe-4766-b136-fc7ef839dfab] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:10:44.608202  317625 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-017274" [9614a86d-4fc5-47b3-aa96-a4adfa19424b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:10:44.608211  317625 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-017274" [2287dfc7-76ac-4fbb-b232-c09511cbed19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:10:44.608219  317625 system_pods.go:89] "kube-proxy-82xcl" [7881caf5-4a0e-483d-aa7d-1e777513587f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:10:44.608235  317625 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-017274" [90a03547-43ce-4036-9a92-3f5085fd62d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:10:44.608248  317625 system_pods.go:89] "storage-provisioner" [a2ec03f2-f2b6-42f9-a758-85de0d658ec3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:10:44.608258  317625 system_pods.go:126] duration metric: took 3.471244ms to wait for k8s-apps to be running ...
	I1029 09:10:44.608273  317625 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:10:44.608323  317625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:44.622522  317625 system_svc.go:56] duration metric: took 14.241486ms WaitForService to wait for kubelet
	I1029 09:10:44.622547  317625 kubeadm.go:587] duration metric: took 3.096493749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:10:44.622564  317625 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:10:44.625775  317625 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:10:44.625803  317625 node_conditions.go:123] node cpu capacity is 8
	I1029 09:10:44.625819  317625 node_conditions.go:105] duration metric: took 3.250078ms to run NodePressure ...
	I1029 09:10:44.625834  317625 start.go:242] waiting for startup goroutines ...
	I1029 09:10:44.625843  317625 start.go:247] waiting for cluster config update ...
	I1029 09:10:44.625858  317625 start.go:256] writing updated cluster config ...
	I1029 09:10:44.626146  317625 ssh_runner.go:195] Run: rm -f paused
	I1029 09:10:44.630600  317625 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:10:44.634880  317625 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qtsxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:10:46.641345  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:10:48.641582  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:10:50.642286  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:10:53.141753  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
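[Editor's note] The pod_ready loop above is minikube's own poll with a 4m0s budget; a close one-shot analogue with plain kubectl is a `wait` on the same label (note kubectl wait fails rather than succeeds if the pod is deleted, unlike minikube's "Ready or be gone" semantics):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=4m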
	
	
	==> CRI-O <==
	Oct 29 09:10:21 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:21.830663498Z" level=info msg="Started container" PID=1713 containerID=065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper id=22892e1f-05dc-4daf-88ac-74d2f2dc3205 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54c8188b59bdbae05495b7a0bb2d513278a4aa20bfff556b2ca00fe523741af3
	Oct 29 09:10:22 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:22.781259528Z" level=info msg="Removing container: 4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd" id=83fd0e83-5ef8-46e8-a1ac-16eba4dc63c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:22 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:22.791411222Z" level=info msg="Removed container 4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=83fd0e83-5ef8-46e8-a1ac-16eba4dc63c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.802769752Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=29856d36-5ecc-4986-8cfe-f6aa06661f5d name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.803751523Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4baefd6b-4128-4363-83cd-edea315daa10 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.804818511Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e219019e-f6bd-4970-9f2e-780c4f2fa05b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.804952403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.809610608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.80984125Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ede10dac12f515b582d344523b588e252c991ddcc84e4eb8cff6773dcaed1357/merged/etc/passwd: no such file or directory"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.809879951Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ede10dac12f515b582d344523b588e252c991ddcc84e4eb8cff6773dcaed1357/merged/etc/group: no such file or directory"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.810211757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.83451575Z" level=info msg="Created container 760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac: kube-system/storage-provisioner/storage-provisioner" id=e219019e-f6bd-4970-9f2e-780c4f2fa05b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.835261748Z" level=info msg="Starting container: 760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac" id=145cab5f-6a8b-4c93-b77f-ddccd64b7366 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:31 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:31.83721559Z" level=info msg="Started container" PID=1729 containerID=760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac description=kube-system/storage-provisioner/storage-provisioner id=145cab5f-6a8b-4c93-b77f-ddccd64b7366 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94507d456e4f70023e43b0d5e4fc64a2ff103f0bbac71d899e43d28220cdb158
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.685108332Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9bdb5ec4-915f-4fc2-a988-007e017c8573 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.686178152Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=527b285f-1fcb-4507-be26-9b5fd4049dc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.687202784Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=c991d8e6-10e8-4f5a-9c1f-3a294455f865 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.687358898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.693413387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.693940894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.724892815Z" level=info msg="Created container 30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=c991d8e6-10e8-4f5a-9c1f-3a294455f865 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.725650947Z" level=info msg="Starting container: 30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a" id=e1f80a64-6e3a-4c21-8eef-09648f99e81d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.727562301Z" level=info msg="Started container" PID=1766 containerID=30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper id=e1f80a64-6e3a-4c21-8eef-09648f99e81d name=/runtime.v1.RuntimeService/StartContainer sandboxID=54c8188b59bdbae05495b7a0bb2d513278a4aa20bfff556b2ca00fe523741af3
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.831192263Z" level=info msg="Removing container: 065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de" id=75e90672-0316-4dbf-94cb-f4c7d0c68f7f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:39 old-k8s-version-096492 crio[565]: time="2025-10-29T09:10:39.841081418Z" level=info msg="Removed container 065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2/dashboard-metrics-scraper" id=75e90672-0316-4dbf-94cb-f4c7d0c68f7f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	30070075bf0d8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   54c8188b59bdb       dashboard-metrics-scraper-5f989dc9cf-t2wb2       kubernetes-dashboard
	760a87238b2f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   94507d456e4f7       storage-provisioner                              kube-system
	bcd2e5c5941b1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   55b0c5ca1243c       kubernetes-dashboard-8694d4445c-zt5m2            kubernetes-dashboard
	c487e8fe869e3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   a5c9225453bff       coredns-5dd5756b68-v5mr5                         kube-system
	c2923b1598f3b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   82d81fc36a99f       busybox                                          default
	4d9dad20289cc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   a0eb79b83d195       kindnet-7qztm                                    kube-system
	031a423a3b88f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   94507d456e4f7       storage-provisioner                              kube-system
	737af4626b5ae       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   69b645a5faa77       kube-proxy-8kpqf                                 kube-system
	eb75fa40098e3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   c3fc3c7f4fba1       etcd-old-k8s-version-096492                      kube-system
	d92dd056da0fc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   4294f405906c3       kube-controller-manager-old-k8s-version-096492   kube-system
	f75d2e46364d0       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   8ac1d54dae710       kube-apiserver-old-k8s-version-096492            kube-system
	3c2ce552cdf8c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   2c6d3ef5183ae       kube-scheduler-old-k8s-version-096492            kube-system
	
	
	==> coredns [c487e8fe869e3ca2313a2d3948922a35774499c95c5df3089ea171e2f4b4e5e9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:52469 - 34395 "HINFO IN 3988397917584717053.5984887629911655176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.451363112s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-096492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-096492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=old-k8s-version-096492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_08_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:08:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-096492
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:10:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:10:30 +0000   Wed, 29 Oct 2025 09:09:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-096492
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                9ea7cc04-2266-42af-af7f-14c5bd55b0ca
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-v5mr5                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-096492                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-7qztm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-096492             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-096492    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-8kpqf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-096492             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-t2wb2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-zt5m2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-096492 event: Registered Node old-k8s-version-096492 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-096492 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node old-k8s-version-096492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node old-k8s-version-096492 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                    node-controller  Node old-k8s-version-096492 event: Registered Node old-k8s-version-096492 in Controller
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [eb75fa40098e331528b7e94c2e2a71c766423c56a220a24eaaa69f66efdce4b6] <==
	{"level":"info","ts":"2025-10-29T09:09:57.235768Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:09:57.235781Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-29T09:09:57.23623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-29T09:09:57.236439Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-29T09:09:57.236678Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:09:57.236742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:09:57.23755Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-29T09:09:57.23766Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-29T09:09:57.237704Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-29T09:09:57.23786Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-29T09:09:57.23789Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-29T09:09:59.027587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-29T09:09:59.027634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:09:59.027696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-29T09:09:59.02771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.027723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.027733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.027746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-29T09:09:59.030073Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-096492 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:09:59.030081Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:09:59.030105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:09:59.030402Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:09:59.030431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-29T09:09:59.031157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-29T09:09:59.031503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 09:10:55 up 53 min,  0 user,  load average: 4.75, 4.10, 2.61
	Linux old-k8s-version-096492 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d9dad20289cc57f254242baa3e4cb4a7f518fd33fd2534285dec97a5d521b07] <==
	I1029 09:10:01.349356       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:01.349768       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:10:01.350025       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:01.350086       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:01.350140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:01.553684       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:01.553850       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:01.553866       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:01.554036       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:01.954246       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:01.954275       1 metrics.go:72] Registering metrics
	I1029 09:10:01.954343       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:11.555481       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:11.555544       1 main.go:301] handling current node
	I1029 09:10:21.554177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:21.554216       1 main.go:301] handling current node
	I1029 09:10:31.554541       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:31.554573       1 main.go:301] handling current node
	I1029 09:10:41.554142       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:41.554188       1 main.go:301] handling current node
	I1029 09:10:51.558301       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1029 09:10:51.558342       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f75d2e46364d0954bc8168a45bbf13f9854e2c28802b489937d6d807e197c25c] <==
	I1029 09:10:00.091321       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1029 09:10:00.115834       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1029 09:10:00.116058       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1029 09:10:00.117773       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1029 09:10:00.117268       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1029 09:10:00.117339       1 shared_informer.go:318] Caches are synced for configmaps
	I1029 09:10:00.119893       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1029 09:10:00.119899       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:10:00.120147       1 aggregator.go:166] initial CRD sync complete...
	I1029 09:10:00.120164       1 autoregister_controller.go:141] Starting autoregister controller
	I1029 09:10:00.120171       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:00.120179       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:10:00.123260       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:10:00.153491       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:10:01.022403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:01.084288       1 controller.go:624] quota admission added evaluator for: namespaces
	I1029 09:10:01.133006       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1029 09:10:01.158333       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:01.171727       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:01.184059       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1029 09:10:01.228793       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.157.13"}
	I1029 09:10:01.241876       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.49.76"}
	I1029 09:10:13.058436       1 controller.go:624] quota admission added evaluator for: endpoints
	I1029 09:10:13.099381       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1029 09:10:13.123521       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d92dd056da0fc02c82efa03b27cf291df638b891640d4514b8dea24f11e44842] <==
	I1029 09:10:13.148190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.581111ms"
	I1029 09:10:13.160193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.874212ms"
	I1029 09:10:13.160308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.795µs"
	I1029 09:10:13.178393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.49956ms"
	I1029 09:10:13.178621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.097µs"
	I1029 09:10:13.180664       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1029 09:10:13.180998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.338µs"
	I1029 09:10:13.195431       1 shared_informer.go:318] Caches are synced for cronjob
	I1029 09:10:13.218610       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:10:13.245768       1 shared_informer.go:318] Caches are synced for service account
	I1029 09:10:13.263496       1 shared_informer.go:318] Caches are synced for resource quota
	I1029 09:10:13.266807       1 shared_informer.go:318] Caches are synced for job
	I1029 09:10:13.312179       1 shared_informer.go:318] Caches are synced for namespace
	I1029 09:10:13.621439       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:10:13.621469       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1029 09:10:13.645777       1 shared_informer.go:318] Caches are synced for garbage collector
	I1029 09:10:19.796362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.781253ms"
	I1029 09:10:19.797975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="106.452µs"
	I1029 09:10:21.787447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.624µs"
	I1029 09:10:22.792554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.09µs"
	I1029 09:10:23.794195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.703µs"
	I1029 09:10:36.199687       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.094396ms"
	I1029 09:10:36.199813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.471µs"
	I1029 09:10:39.842472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.144µs"
	I1029 09:10:43.461303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.751µs"
	
	
	==> kube-proxy [737af4626b5ae3892122d80b6d43829693d67087a126e1edab5cb80129fc0b89] <==
	I1029 09:10:01.126763       1 server_others.go:69] "Using iptables proxy"
	I1029 09:10:01.138282       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1029 09:10:01.168158       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:01.171074       1 server_others.go:152] "Using iptables Proxier"
	I1029 09:10:01.171112       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1029 09:10:01.171118       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1029 09:10:01.171154       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1029 09:10:01.171442       1 server.go:846] "Version info" version="v1.28.0"
	I1029 09:10:01.171464       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:01.172405       1 config.go:188] "Starting service config controller"
	I1029 09:10:01.172447       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1029 09:10:01.172446       1 config.go:97] "Starting endpoint slice config controller"
	I1029 09:10:01.172543       1 config.go:315] "Starting node config controller"
	I1029 09:10:01.172583       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1029 09:10:01.172600       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1029 09:10:01.273260       1 shared_informer.go:318] Caches are synced for service config
	I1029 09:10:01.273261       1 shared_informer.go:318] Caches are synced for node config
	I1029 09:10:01.274535       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3c2ce552cdf8c320285c2bb9f072826ac4a862ddc09798713d1491913854ccfa] <==
	I1029 09:09:57.584628       1 serving.go:348] Generated self-signed cert in-memory
	I1029 09:10:00.105286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1029 09:10:00.105320       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:00.109956       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:00.109987       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:10:00.109960       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1029 09:10:00.110075       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1029 09:10:00.110071       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:00.110129       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1029 09:10:00.111215       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1029 09:10:00.111307       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1029 09:10:00.211147       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1029 09:10:00.211155       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:10:00.211169       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.146553     716 topology_manager.go:215] "Topology Admit Handler" podUID="f46f157e-bc03-44ee-8351-6e8f3b4da48e" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-zt5m2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323248     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9lmn\" (UniqueName: \"kubernetes.io/projected/f4193758-8c8c-48ab-8c47-8fab422e6ad2-kube-api-access-f9lmn\") pod \"dashboard-metrics-scraper-5f989dc9cf-t2wb2\" (UID: \"f4193758-8c8c-48ab-8c47-8fab422e6ad2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323331     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f46f157e-bc03-44ee-8351-6e8f3b4da48e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-zt5m2\" (UID: \"f46f157e-bc03-44ee-8351-6e8f3b4da48e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zt5m2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323478     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4193758-8c8c-48ab-8c47-8fab422e6ad2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-t2wb2\" (UID: \"f4193758-8c8c-48ab-8c47-8fab422e6ad2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2"
	Oct 29 09:10:13 old-k8s-version-096492 kubelet[716]: I1029 09:10:13.323574     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwcf\" (UniqueName: \"kubernetes.io/projected/f46f157e-bc03-44ee-8351-6e8f3b4da48e-kube-api-access-fjwcf\") pod \"kubernetes-dashboard-8694d4445c-zt5m2\" (UID: \"f46f157e-bc03-44ee-8351-6e8f3b4da48e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zt5m2"
	Oct 29 09:10:21 old-k8s-version-096492 kubelet[716]: I1029 09:10:21.775050     716 scope.go:117] "RemoveContainer" containerID="4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd"
	Oct 29 09:10:21 old-k8s-version-096492 kubelet[716]: I1029 09:10:21.787438     716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zt5m2" podStartSLOduration=2.95518771 podCreationTimestamp="2025-10-29 09:10:13 +0000 UTC" firstStartedPulling="2025-10-29 09:10:13.574721443 +0000 UTC m=+16.999034267" lastFinishedPulling="2025-10-29 09:10:19.406901181 +0000 UTC m=+22.831214009" observedRunningTime="2025-10-29 09:10:19.788690098 +0000 UTC m=+23.213002935" watchObservedRunningTime="2025-10-29 09:10:21.787367452 +0000 UTC m=+25.211680298"
	Oct 29 09:10:22 old-k8s-version-096492 kubelet[716]: I1029 09:10:22.779708     716 scope.go:117] "RemoveContainer" containerID="4db58cdeb957bfd00e2ee03db0708c3f5d62266639e6245412b4fbb47e4e12dd"
	Oct 29 09:10:22 old-k8s-version-096492 kubelet[716]: I1029 09:10:22.779886     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:22 old-k8s-version-096492 kubelet[716]: E1029 09:10:22.780348     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:23 old-k8s-version-096492 kubelet[716]: I1029 09:10:23.784230     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:23 old-k8s-version-096492 kubelet[716]: E1029 09:10:23.784575     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:24 old-k8s-version-096492 kubelet[716]: I1029 09:10:24.786026     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:24 old-k8s-version-096492 kubelet[716]: E1029 09:10:24.786315     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:31 old-k8s-version-096492 kubelet[716]: I1029 09:10:31.802215     716 scope.go:117] "RemoveContainer" containerID="031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: I1029 09:10:39.684399     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: I1029 09:10:39.828584     716 scope.go:117] "RemoveContainer" containerID="065d4e3b38e2fcbd8f186b59d39829b11ea55bdaafafe5912a82447471af87de"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: I1029 09:10:39.828896     716 scope.go:117] "RemoveContainer" containerID="30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	Oct 29 09:10:39 old-k8s-version-096492 kubelet[716]: E1029 09:10:39.829254     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:43 old-k8s-version-096492 kubelet[716]: I1029 09:10:43.448598     716 scope.go:117] "RemoveContainer" containerID="30070075bf0d8949d74b69220a19c409ef57a25928f6cd1dc21dc144031e1f3a"
	Oct 29 09:10:43 old-k8s-version-096492 kubelet[716]: E1029 09:10:43.449018     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t2wb2_kubernetes-dashboard(f4193758-8c8c-48ab-8c47-8fab422e6ad2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t2wb2" podUID="f4193758-8c8c-48ab-8c47-8fab422e6ad2"
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:10:50 old-k8s-version-096492 systemd[1]: kubelet.service: Consumed 1.627s CPU time.
	
	
	==> kubernetes-dashboard [bcd2e5c5941b1ea800a416d3424313ba461f9e10fec33ea99dbad02c9f819245] <==
	2025/10/29 09:10:19 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:19 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:19 Using secret token for csrf signing
	2025/10/29 09:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:19 Successful initial request to the apiserver, version: v1.28.0
	2025/10/29 09:10:19 Generating JWE encryption key
	2025/10/29 09:10:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:19 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:19 Creating in-cluster Sidecar client
	2025/10/29 09:10:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:19 Serving insecurely on HTTP port: 9090
	2025/10/29 09:10:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:19 Starting overwatch
	
	
	==> storage-provisioner [031a423a3b88f15d8793b324f402b6d66cad2fce5d425423baf97566df02d968] <==
	I1029 09:10:01.085119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:10:31.090332       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [760a87238b2f69d30a492a8425720859425081870c4d1699bd2cd63e614eb1ac] <==
	I1029 09:10:31.849232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:10:31.857442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:10:31.857490       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1029 09:10:49.257012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:10:49.257136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e7c8c98-8fd4-43b1-9dc7-61c97a398c0b", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-096492_fbf225a0-d214-4fb3-a439-f1ccb5621b69 became leader
	I1029 09:10:49.257219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-096492_fbf225a0-d214-4fb3-a439-f1ccb5621b69!
	I1029 09:10:49.357980       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-096492_fbf225a0-d214-4fb3-a439-f1ccb5621b69!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096492 -n old-k8s-version-096492
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096492 -n old-k8s-version-096492: exit status 2 (340.791339ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-096492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.37s)
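
Note: a recurring symptom in the post-mortem above is that in-cluster clients cannot reach the apiserver Service VIP: both coredns and the first storage-provisioner container fail with 'Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout'. The sketch below is an illustrative re-creation of that probe (not part of the test harness) that can be run from a pod to reproduce the check in isolation; the VIP and the 32-second timeout are copied from the captured log line, and skipping TLS verification is a sketch-only simplification (the real clients verify against the cluster CA).

	// probe.go: minimal re-creation of the /version check that timed out above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches the failing "?timeout=32s" request
			Transport: &http.Transport{
				// Sketch-only shortcut; real in-cluster clients trust the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			// This branch corresponds to the "i/o timeout" lines in the logs above.
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}

Run from inside a pod on the node, a timeout here points at the service-proxy/CNI path to 10.96.0.1 rather than at the individual addons that logged the error.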

TestStartStop/group/embed-certs/serial/Pause (6.76s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-834228 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-834228 --alsologtostderr -v=1: exit status 80 (2.394439873s)

-- stdout --
	* Pausing node embed-certs-834228 ... 
	
	

-- /stdout --
** stderr ** 
	I1029 09:10:56.701002  322297 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:56.701289  322297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:56.701302  322297 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:56.701309  322297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:56.701611  322297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:56.701958  322297 out.go:368] Setting JSON to false
	I1029 09:10:56.702014  322297 mustload.go:66] Loading cluster: embed-certs-834228
	I1029 09:10:56.702497  322297 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:56.703086  322297 cli_runner.go:164] Run: docker container inspect embed-certs-834228 --format={{.State.Status}}
	I1029 09:10:56.722946  322297 host.go:66] Checking if "embed-certs-834228" exists ...
	I1029 09:10:56.723252  322297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:56.781644  322297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-29 09:10:56.7706293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:56.782413  322297 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-834228 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:10:56.784421  322297 out.go:179] * Pausing node embed-certs-834228 ... 
	I1029 09:10:56.785728  322297 host.go:66] Checking if "embed-certs-834228" exists ...
	I1029 09:10:56.786126  322297 ssh_runner.go:195] Run: systemctl --version
	I1029 09:10:56.786178  322297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834228
	I1029 09:10:56.807508  322297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/embed-certs-834228/id_rsa Username:docker}
	I1029 09:10:56.908359  322297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:56.940946  322297 pause.go:52] kubelet running: true
	I1029 09:10:56.941034  322297 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:57.097327  322297 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:57.097432  322297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:57.176484  322297 cri.go:89] found id: "99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4"
	I1029 09:10:57.176504  322297 cri.go:89] found id: "6059afdcbebe5ca0a661823dd7be248fa4f6a9c15ea88db99e03727fb7d8a75e"
	I1029 09:10:57.176508  322297 cri.go:89] found id: "4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae"
	I1029 09:10:57.176520  322297 cri.go:89] found id: "f5bd47442b5f7f9b7a37ce50069bed596566fa194f52aaa92883fd90301afcf4"
	I1029 09:10:57.176523  322297 cri.go:89] found id: "e17241a0d1168446f12fd7f52847d34e2e3d87b159fe20668c7e1f2d7cdefe80"
	I1029 09:10:57.176526  322297 cri.go:89] found id: "66aa912baa9af98f94ecb5b252508da4dacdaa895aab155c9bbd90f2b07a6d67"
	I1029 09:10:57.176528  322297 cri.go:89] found id: "b1e012893324df4a540415d2a2a886bc9306d87f1be54870a37e70562f009661"
	I1029 09:10:57.176531  322297 cri.go:89] found id: "0d384ad349a4f9e1f45c716a0c367d307afd1e671eccf883335f5764690e871a"
	I1029 09:10:57.176533  322297 cri.go:89] found id: "f516353885ecbd2eb5072fd9daac8f0cc0f088a1992d0c02fe4ca4ec5d2f5393"
	I1029 09:10:57.176539  322297 cri.go:89] found id: "0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	I1029 09:10:57.176542  322297 cri.go:89] found id: "78b8db7671738ad247acc38622873834d4d545afb33749490f066640abb90793"
	I1029 09:10:57.176544  322297 cri.go:89] found id: ""
	I1029 09:10:57.176599  322297 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:57.189668  322297 retry.go:31] will retry after 149.398483ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:57Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:57.340075  322297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:57.353843  322297 pause.go:52] kubelet running: false
	I1029 09:10:57.353896  322297 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:57.519183  322297 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:57.519268  322297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:57.601148  322297 cri.go:89] found id: "99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4"
	I1029 09:10:57.601174  322297 cri.go:89] found id: "6059afdcbebe5ca0a661823dd7be248fa4f6a9c15ea88db99e03727fb7d8a75e"
	I1029 09:10:57.601180  322297 cri.go:89] found id: "4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae"
	I1029 09:10:57.601186  322297 cri.go:89] found id: "f5bd47442b5f7f9b7a37ce50069bed596566fa194f52aaa92883fd90301afcf4"
	I1029 09:10:57.601190  322297 cri.go:89] found id: "e17241a0d1168446f12fd7f52847d34e2e3d87b159fe20668c7e1f2d7cdefe80"
	I1029 09:10:57.601195  322297 cri.go:89] found id: "66aa912baa9af98f94ecb5b252508da4dacdaa895aab155c9bbd90f2b07a6d67"
	I1029 09:10:57.601199  322297 cri.go:89] found id: "b1e012893324df4a540415d2a2a886bc9306d87f1be54870a37e70562f009661"
	I1029 09:10:57.601202  322297 cri.go:89] found id: "0d384ad349a4f9e1f45c716a0c367d307afd1e671eccf883335f5764690e871a"
	I1029 09:10:57.601205  322297 cri.go:89] found id: "f516353885ecbd2eb5072fd9daac8f0cc0f088a1992d0c02fe4ca4ec5d2f5393"
	I1029 09:10:57.601216  322297 cri.go:89] found id: "0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	I1029 09:10:57.601222  322297 cri.go:89] found id: "78b8db7671738ad247acc38622873834d4d545afb33749490f066640abb90793"
	I1029 09:10:57.601224  322297 cri.go:89] found id: ""
	I1029 09:10:57.601265  322297 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:57.613755  322297 retry.go:31] will retry after 542.951131ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:57Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:58.157209  322297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:58.171811  322297 pause.go:52] kubelet running: false
	I1029 09:10:58.171870  322297 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:58.336575  322297 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:58.336681  322297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:58.415825  322297 cri.go:89] found id: "99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4"
	I1029 09:10:58.415851  322297 cri.go:89] found id: "6059afdcbebe5ca0a661823dd7be248fa4f6a9c15ea88db99e03727fb7d8a75e"
	I1029 09:10:58.415857  322297 cri.go:89] found id: "4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae"
	I1029 09:10:58.415861  322297 cri.go:89] found id: "f5bd47442b5f7f9b7a37ce50069bed596566fa194f52aaa92883fd90301afcf4"
	I1029 09:10:58.415866  322297 cri.go:89] found id: "e17241a0d1168446f12fd7f52847d34e2e3d87b159fe20668c7e1f2d7cdefe80"
	I1029 09:10:58.415870  322297 cri.go:89] found id: "66aa912baa9af98f94ecb5b252508da4dacdaa895aab155c9bbd90f2b07a6d67"
	I1029 09:10:58.415874  322297 cri.go:89] found id: "b1e012893324df4a540415d2a2a886bc9306d87f1be54870a37e70562f009661"
	I1029 09:10:58.415878  322297 cri.go:89] found id: "0d384ad349a4f9e1f45c716a0c367d307afd1e671eccf883335f5764690e871a"
	I1029 09:10:58.415882  322297 cri.go:89] found id: "f516353885ecbd2eb5072fd9daac8f0cc0f088a1992d0c02fe4ca4ec5d2f5393"
	I1029 09:10:58.415891  322297 cri.go:89] found id: "0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	I1029 09:10:58.415895  322297 cri.go:89] found id: "78b8db7671738ad247acc38622873834d4d545afb33749490f066640abb90793"
	I1029 09:10:58.415899  322297 cri.go:89] found id: ""
	I1029 09:10:58.415935  322297 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:58.428747  322297 retry.go:31] will retry after 326.167105ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:58Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:58.756094  322297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:58.770101  322297 pause.go:52] kubelet running: false
	I1029 09:10:58.770151  322297 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:58.921502  322297 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:58.921588  322297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:59.000447  322297 cri.go:89] found id: "99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4"
	I1029 09:10:59.000473  322297 cri.go:89] found id: "6059afdcbebe5ca0a661823dd7be248fa4f6a9c15ea88db99e03727fb7d8a75e"
	I1029 09:10:59.000479  322297 cri.go:89] found id: "4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae"
	I1029 09:10:59.000484  322297 cri.go:89] found id: "f5bd47442b5f7f9b7a37ce50069bed596566fa194f52aaa92883fd90301afcf4"
	I1029 09:10:59.000488  322297 cri.go:89] found id: "e17241a0d1168446f12fd7f52847d34e2e3d87b159fe20668c7e1f2d7cdefe80"
	I1029 09:10:59.000494  322297 cri.go:89] found id: "66aa912baa9af98f94ecb5b252508da4dacdaa895aab155c9bbd90f2b07a6d67"
	I1029 09:10:59.000498  322297 cri.go:89] found id: "b1e012893324df4a540415d2a2a886bc9306d87f1be54870a37e70562f009661"
	I1029 09:10:59.000502  322297 cri.go:89] found id: "0d384ad349a4f9e1f45c716a0c367d307afd1e671eccf883335f5764690e871a"
	I1029 09:10:59.000507  322297 cri.go:89] found id: "f516353885ecbd2eb5072fd9daac8f0cc0f088a1992d0c02fe4ca4ec5d2f5393"
	I1029 09:10:59.000514  322297 cri.go:89] found id: "0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	I1029 09:10:59.000518  322297 cri.go:89] found id: "78b8db7671738ad247acc38622873834d4d545afb33749490f066640abb90793"
	I1029 09:10:59.000523  322297 cri.go:89] found id: ""
	I1029 09:10:59.000570  322297 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:59.023696  322297 out.go:203] 
	W1029 09:10:59.025100  322297 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:10:59.025124  322297 out.go:285] * 
	* 
	W1029 09:10:59.030480  322297 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:10:59.032047  322297 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-834228 --alsologtostderr -v=1 failed: exit status 80
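The failure above is mechanical: on this CRI-O node, "sudo runc list -f json" exits 1 because runc's default state root /run/runc does not exist, and the pause path re-runs the listing a few times with randomized delays (the retry.go lines: 149ms, 542ms, 326ms) before surfacing GUEST_PAUSE with exit status 80. A minimal sketch of that retry shape, assuming a hypothetical retryWithJitter helper (illustrative only, not minikube's actual implementation):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryWithJitter re-runs fn until it succeeds or attempts are exhausted,
// sleeping a randomized delay between tries, as the log above appears to do.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Randomize the delay so concurrent callers do not retry in lockstep.
		time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
	}
	return err
}

func main() {
	err := retryWithJitter(4, 200*time.Millisecond, func() error {
		// The command that fails in the log: runc cannot open its
		// default state root /run/runc on this node.
		return exec.Command("sudo", "runc", "list", "-f", "json").Run()
	})
	if err != nil {
		fmt.Println("giving up:", err) // maps to GUEST_PAUSE in the report
	}
}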
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-834228
helpers_test.go:243: (dbg) docker inspect embed-certs-834228:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1",
	        "Created": "2025-10-29T09:08:47.072061223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:09:55.954614638Z",
	            "FinishedAt": "2025-10-29T09:09:54.978539021Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/hosts",
	        "LogPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1-json.log",
	        "Name": "/embed-certs-834228",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-834228:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-834228",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1",
	                "LowerDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-834228",
	                "Source": "/var/lib/docker/volumes/embed-certs-834228/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-834228",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-834228",
	                "name.minikube.sigs.k8s.io": "embed-certs-834228",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f837e4bbb0389d8aa93b50ccf39c406c8469db17210f2e441b5708b8229a276f",
	            "SandboxKey": "/var/run/docker/netns/f837e4bbb038",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-834228": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:49:7f:1e:e8:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86d19029abe0aa5f7ddaf42bf75485455d5c473387cb83ef6c0d4c78851e1205",
	                    "EndpointID": "926ba760a71646fffe60727e536bdd963b828ab3f098ec718803cf88fb5aec60",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-834228",
	                        "078bf67023c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
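For reference, the SSH port 33113 that sshutil reported earlier comes straight out of the NetworkSettings.Ports block above: docker inspect -f evaluates an ordinary Go text/template against this JSON. A self-contained sketch of how that format string resolves, with the JSON trimmed down by hand for illustration:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// A hand-trimmed excerpt of the inspect output above, kept only to show the
// shape the template navigates.
const inspectJSON = `{
  "NetworkSettings": {
    "Ports": {
      "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33113" } ]
    }
  }
}`

func main() {
	var c map[string]any
	if err := json.Unmarshal([]byte(inspectJSON), &c); err != nil {
		panic(err)
	}
	// index works on maps and slices, exactly as in `docker inspect -f`:
	// look up the "22/tcp" key, take element 0, read HostPort.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 33113
		panic(err)
	}
}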
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228: exit status 2 (378.15359ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
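As the "(may be ok)" note says, the harness treats a non-zero exit from the status probe as informational: the host state still arrives on stdout ("Running") while the exit code flags a degraded component. A hedged sketch of how a harness can split those two signals, reusing the command line from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", "embed-certs-834228").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit: stdout still carries the formatted state, so
		// report both rather than failing outright ("may be ok").
		fmt.Printf("status %q with exit code %d\n", out, ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // command did not run at all
	}
	fmt.Printf("status %q\n", out)
}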
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-834228 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-834228 logs -n 25: (1.340775862s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:59.394267  323285 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:59.394622  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394635  323285 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:59.394640  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394949  323285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:59.395669  323285 out.go:368] Setting JSON to false
	I1029 09:10:59.397426  323285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3207,"bootTime":1761725852,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:59.397490  323285 start.go:143] virtualization: kvm guest
	I1029 09:10:59.399709  323285 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:59.401275  323285 notify.go:221] Checking for updates...
	I1029 09:10:59.401303  323285 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:59.402811  323285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:59.404227  323285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:59.405575  323285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:59.406888  323285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:59.408222  323285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:59.410015  323285 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410148  323285 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410263  323285 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410378  323285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:59.435730  323285 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:59.435827  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.503060  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.489541208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.503182  323285 docker.go:319] overlay module found
	I1029 09:10:59.505269  323285 out.go:179] * Using the docker driver based on user configuration
	
	
	==> CRI-O <==
	Oct 29 09:10:17 embed-certs-834228 crio[560]: time="2025-10-29T09:10:17.817081997Z" level=info msg="Started container" PID=1722 containerID=9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper id=90b8d865-7bad-44bb-a330-61ca98ca8c50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f25e0d9988718bd3575e4611b9d09b38dc806d77f48c5eb63b00738f084e6f8f
	Oct 29 09:10:18 embed-certs-834228 crio[560]: time="2025-10-29T09:10:18.765478878Z" level=info msg="Removing container: 2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762" id=8ff5f0a7-f4f5-4d4d-8d5d-2ff03ce787ee name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:18 embed-certs-834228 crio[560]: time="2025-10-29T09:10:18.825815323Z" level=info msg="Removed container 2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=8ff5f0a7-f4f5-4d4d-8d5d-2ff03ce787ee name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.643130293Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=606b2f7e-d8ec-4e75-843b-f98757f52722 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.644235962Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=898386f4-a63f-467d-98b4-18683e2e966b name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.645310506Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=5b1d2342-87c3-4a0e-bacc-8bccaf00d00d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.645461148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.652466447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.653168471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.681867152Z" level=info msg="Created container 0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=5b1d2342-87c3-4a0e-bacc-8bccaf00d00d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.682548372Z" level=info msg="Starting container: 0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc" id=046724c6-3997-4b9e-9cac-060b276239a1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.684660769Z" level=info msg="Started container" PID=1732 containerID=0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper id=046724c6-3997-4b9e-9cac-060b276239a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f25e0d9988718bd3575e4611b9d09b38dc806d77f48c5eb63b00738f084e6f8f
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.806566254Z" level=info msg="Removing container: 9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b" id=f10d12b2-eeed-4780-aa8d-cbf86a148dac name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.817584364Z" level=info msg="Removed container 9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=f10d12b2-eeed-4780-aa8d-cbf86a148dac name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.811805353Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9404538b-6d72-4829-9693-80601c551011 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.81283944Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8efe9e22-cb7d-4e8d-aa88-e050be29fb3c name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.813987112Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e5ee6279-b769-4bf9-b196-6327365c4fc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.814169214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.818874793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.819083049Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c2905fc6c5acbf5c484b443007d3b497a12ec652272d105ef408882ca993750e/merged/etc/passwd: no such file or directory"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.819116231Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c2905fc6c5acbf5c484b443007d3b497a12ec652272d105ef408882ca993750e/merged/etc/group: no such file or directory"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.819415843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.847311314Z" level=info msg="Created container 99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4: kube-system/storage-provisioner/storage-provisioner" id=e5ee6279-b769-4bf9-b196-6327365c4fc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.84801127Z" level=info msg="Starting container: 99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4" id=277021ed-9b0c-48ef-ae4f-833e64c69bfa name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.850093501Z" level=info msg="Started container" PID=1746 containerID=99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4 description=kube-system/storage-provisioner/storage-provisioner id=277021ed-9b0c-48ef-ae4f-833e64c69bfa name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9efc6f74338a7d12a3c9fb968b2a6f6af2a2ad198a45e2ad041f85efee0e8fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	99394d9374394       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   c9efc6f74338a       storage-provisioner                          kube-system
	0fe34722ad970       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   f25e0d9988718       dashboard-metrics-scraper-6ffb444bf9-l5d9b   kubernetes-dashboard
	78b8db7671738       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   803866a2a7b02       kubernetes-dashboard-855c9754f9-c42hl        kubernetes-dashboard
	6059afdcbebe5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   4e934fd2aa907       coredns-66bc5c9577-w9vf6                     kube-system
	016253316a2e4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   5528cdd9c56bb       busybox                                      default
	4b8003b860d3d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   c9efc6f74338a       storage-provisioner                          kube-system
	f5bd47442b5f7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   f69d5ba3c7c6c       kube-proxy-bxthb                             kube-system
	e17241a0d1168       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   59837b14e23d1       kindnet-dgkfz                                kube-system
	66aa912baa9af       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   cece7a64b45f4       kube-controller-manager-embed-certs-834228   kube-system
	b1e012893324d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   7256ed6272871       etcd-embed-certs-834228                      kube-system
	0d384ad349a4f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   4da5cf70ed7d2       kube-scheduler-embed-certs-834228            kube-system
	f516353885ecb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   7abf27d9ba477       kube-apiserver-embed-certs-834228            kube-system
	
	
	==> coredns [6059afdcbebe5ca0a661823dd7be248fa4f6a9c15ea88db99e03727fb7d8a75e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60698 - 49364 "HINFO IN 2456188340904353391.2179157063304168297. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064360292s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-834228
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-834228
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=embed-certs-834228
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-834228
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:10:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-834228
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d617b188-ae12-430b-83d6-e9ef5bc4858e
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-w9vf6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-834228                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-dgkfz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-834228             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-834228    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-bxthb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-834228             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l5d9b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c42hl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-834228 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-834228 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-834228 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-834228 event: Registered Node embed-certs-834228 in Controller
	  Normal  NodeReady                96s                kubelet          Node embed-certs-834228 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-834228 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-834228 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-834228 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-834228 event: Registered Node embed-certs-834228 in Controller
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [b1e012893324df4a540415d2a2a886bc9306d87f1be54870a37e70562f009661] <==
	{"level":"warn","ts":"2025-10-29T09:10:04.752127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.763289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.772533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.781788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.790583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.804285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.813862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.824746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.837814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.850287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.862558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.874553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.886305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.893580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.903608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.912418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.920923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.933209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.939301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.949501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.956540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.975700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.985525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.994188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.051705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:00 up 53 min,  0 user,  load average: 5.89, 4.35, 2.70
	Linux embed-certs-834228 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e17241a0d1168446f12fd7f52847d34e2e3d87b159fe20668c7e1f2d7cdefe80] <==
	I1029 09:10:06.306662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:06.307147       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:10:06.307337       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:06.307353       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:06.307373       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:06.510822       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:06.511817       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:06.511864       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:06.512043       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:06.912683       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:06.912709       1 metrics.go:72] Registering metrics
	I1029 09:10:06.912767       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:16.511117       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:16.511184       1 main.go:301] handling current node
	I1029 09:10:26.510841       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:26.510924       1 main.go:301] handling current node
	I1029 09:10:36.510673       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:36.510727       1 main.go:301] handling current node
	I1029 09:10:46.510840       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:46.510873       1 main.go:301] handling current node
	I1029 09:10:56.519834       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:56.519870       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f516353885ecbd2eb5072fd9daac8f0cc0f088a1992d0c02fe4ca4ec5d2f5393] <==
	I1029 09:10:05.631492       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:10:05.632480       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:10:05.638833       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:10:05.640015       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:10:05.643039       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:10:05.643061       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:10:05.647098       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:10:05.647528       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:10:05.647565       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:05.647592       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:10:05.671689       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:10:05.680335       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:10:05.682533       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:10:05.729588       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:10:05.736783       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:10:05.981904       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:10:06.037547       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:10:06.085335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:06.098722       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:06.149963       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.141.180"}
	I1029 09:10:06.164549       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.73.29"}
	I1029 09:10:06.537700       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:09.086257       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:10:09.285723       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:10:09.636038       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [66aa912baa9af98f94ecb5b252508da4dacdaa895aab155c9bbd90f2b07a6d67] <==
	I1029 09:10:09.005511       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:10:09.007849       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:10:09.010466       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:10:09.012816       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:10:09.015110       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:10:09.023375       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:10:09.025749       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:10:09.031293       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:10:09.031425       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:10:09.031498       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:10:09.031510       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:10:09.031519       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:10:09.031965       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:10:09.033002       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:10:09.033024       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:10:09.033038       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:10:09.033071       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:10:09.033092       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:10:09.034430       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:10:09.034580       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:10:09.034640       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-834228"
	I1029 09:10:09.034682       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 09:10:09.037731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:09.051604       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:09.059837       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f5bd47442b5f7f9b7a37ce50069bed596566fa194f52aaa92883fd90301afcf4] <==
	I1029 09:10:06.138963       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:10:06.235387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:10:06.335940       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:10:06.336019       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:10:06.336102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:10:06.358033       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:06.358097       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:10:06.364317       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:10:06.364673       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:10:06.364730       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:06.366149       1 config.go:200] "Starting service config controller"
	I1029 09:10:06.366168       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:10:06.366373       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:10:06.366393       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:10:06.366416       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:10:06.366421       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:10:06.366508       1 config.go:309] "Starting node config controller"
	I1029 09:10:06.366551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:10:06.366559       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:10:06.466690       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:10:06.466712       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:10:06.466683       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0d384ad349a4f9e1f45c716a0c367d307afd1e671eccf883335f5764690e871a] <==
	I1029 09:10:04.279235       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:10:05.557819       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:10:05.557872       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:10:05.557886       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:10:05.557895       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:10:05.636628       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:10:05.640501       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:05.647615       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:05.647674       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:05.648127       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:10:05.648258       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:10:05.747866       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661099     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7403dbe1-d1c8-4f97-8971-afca363eee74-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5d9b\" (UID: \"7403dbe1-d1c8-4f97-8971-afca363eee74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b"
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661155     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpl2k\" (UniqueName: \"kubernetes.io/projected/7403dbe1-d1c8-4f97-8971-afca363eee74-kube-api-access-tpl2k\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5d9b\" (UID: \"7403dbe1-d1c8-4f97-8971-afca363eee74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b"
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661177     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hmbw\" (UniqueName: \"kubernetes.io/projected/f4b1270f-98af-4824-964a-6a694dbaa678-kube-api-access-5hmbw\") pod \"kubernetes-dashboard-855c9754f9-c42hl\" (UID: \"f4b1270f-98af-4824-964a-6a694dbaa678\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c42hl"
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661197     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4b1270f-98af-4824-964a-6a694dbaa678-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-c42hl\" (UID: \"f4b1270f-98af-4824-964a-6a694dbaa678\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c42hl"
	Oct 29 09:10:12 embed-certs-834228 kubelet[717]: I1029 09:10:12.106975     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:10:13 embed-certs-834228 kubelet[717]: I1029 09:10:13.747069     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c42hl" podStartSLOduration=0.996309711 podStartE2EDuration="4.74704551s" podCreationTimestamp="2025-10-29 09:10:09 +0000 UTC" firstStartedPulling="2025-10-29 09:10:09.895548526 +0000 UTC m=+7.355748099" lastFinishedPulling="2025-10-29 09:10:13.64628432 +0000 UTC m=+11.106483898" observedRunningTime="2025-10-29 09:10:13.746459493 +0000 UTC m=+11.206659054" watchObservedRunningTime="2025-10-29 09:10:13.74704551 +0000 UTC m=+11.207245090"
	Oct 29 09:10:17 embed-certs-834228 kubelet[717]: I1029 09:10:17.749362     717 scope.go:117] "RemoveContainer" containerID="2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762"
	Oct 29 09:10:18 embed-certs-834228 kubelet[717]: I1029 09:10:18.755003     717 scope.go:117] "RemoveContainer" containerID="2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762"
	Oct 29 09:10:18 embed-certs-834228 kubelet[717]: I1029 09:10:18.755193     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:18 embed-certs-834228 kubelet[717]: E1029 09:10:18.755423     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:19 embed-certs-834228 kubelet[717]: I1029 09:10:19.760623     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:19 embed-certs-834228 kubelet[717]: E1029 09:10:19.760808     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:25 embed-certs-834228 kubelet[717]: I1029 09:10:25.191734     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:25 embed-certs-834228 kubelet[717]: E1029 09:10:25.191978     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: I1029 09:10:35.642538     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: I1029 09:10:35.805251     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: I1029 09:10:35.805473     717 scope.go:117] "RemoveContainer" containerID="0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: E1029 09:10:35.805696     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:36 embed-certs-834228 kubelet[717]: I1029 09:10:36.811397     717 scope.go:117] "RemoveContainer" containerID="4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae"
	Oct 29 09:10:45 embed-certs-834228 kubelet[717]: I1029 09:10:45.191876     717 scope.go:117] "RemoveContainer" containerID="0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	Oct 29 09:10:45 embed-certs-834228 kubelet[717]: E1029 09:10:45.192223     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: kubelet.service: Consumed 1.853s CPU time.
	
	
	==> kubernetes-dashboard [78b8db7671738ad247acc38622873834d4d545afb33749490f066640abb90793] <==
	2025/10/29 09:10:13 Starting overwatch
	2025/10/29 09:10:13 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:13 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:13 Using secret token for csrf signing
	2025/10/29 09:10:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:10:13 Generating JWE encryption key
	2025/10/29 09:10:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:14 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:14 Creating in-cluster Sidecar client
	2025/10/29 09:10:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:14 Serving insecurely on HTTP port: 9090
	2025/10/29 09:10:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae] <==
	I1029 09:10:06.090811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:10:36.093269       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4] <==
	I1029 09:10:36.863084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:10:36.871077       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:10:36.871135       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:10:36.873867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:40.328970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:44.589954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:48.189059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:51.243509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:54.266226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:54.274949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:54.275180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:10:54.275379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-834228_a6f275e9-629d-4d58-91ca-2adc5ce50bea!
	I1029 09:10:54.275423       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b974d585-d04f-4d47-a5da-d6dd7320fe4f", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-834228_a6f275e9-629d-4d58-91ca-2adc5ce50bea became leader
	W1029 09:10:54.277958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:54.281535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:54.376261       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-834228_a6f275e9-629d-4d58-91ca-2adc5ce50bea!
	W1029 09:10:56.285054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:56.289399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:58.293317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:58.298461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:00.303257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:00.310779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
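The dominant failure signal in the logs above is the repeated "dial tcp 10.96.0.1:443: i/o timeout" from CoreDNS and from the first storage-provisioner instance: in-cluster clients could not reach the kubernetes Service VIP while the control plane was paused. A minimal Go sketch of that kind of reachability probe (10.96.0.1:443 is the default Service CIDR VIP seen in the logs; this probe is illustrative only and is not part of the test suite):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// a short timeout surfaces the same "i/o timeout" the pods logged
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// reachability check only, so server certificate verification is skipped
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("service VIP unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("service VIP reachable:", resp.Status)
	}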
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-834228 -n embed-certs-834228
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-834228 -n embed-certs-834228: exit status 2 (393.185629ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
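The --format={{.APIServer}} argument used above is a Go text/template applied to minikube's status structure, which is why the command prints a bare word like "Running" while still reporting the overall state through its exit code. A minimal sketch of that template mechanism (the Status fields here mirror the templates used in this report; the real minikube type carries more fields):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors only the fields referenced by the --format flags in this report.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}
		// the same parsing that turns "{{.APIServer}}" into a field lookup
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Running
	}

Because any Go template is accepted, --format={{.Host}} and --format={{.APIServer}} elsewhere in this report simply select different fields of the same struct.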
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-834228 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-834228
helpers_test.go:243: (dbg) docker inspect embed-certs-834228:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1",
	        "Created": "2025-10-29T09:08:47.072061223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:09:55.954614638Z",
	            "FinishedAt": "2025-10-29T09:09:54.978539021Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/hosts",
	        "LogPath": "/var/lib/docker/containers/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1/078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1-json.log",
	        "Name": "/embed-certs-834228",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-834228:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-834228",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "078bf67023c032ac4868894482b69f6ff64a850506385f376670b0ce81cdd3f1",
	                "LowerDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7acb3c11d92fdc34d3b3d58e00654a2a17b5843585f3e9de7e99b9f5cf5070f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-834228",
	                "Source": "/var/lib/docker/volumes/embed-certs-834228/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-834228",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-834228",
	                "name.minikube.sigs.k8s.io": "embed-certs-834228",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f837e4bbb0389d8aa93b50ccf39c406c8469db17210f2e441b5708b8229a276f",
	            "SandboxKey": "/var/run/docker/netns/f837e4bbb038",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-834228": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:49:7f:1e:e8:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86d19029abe0aa5f7ddaf42bf75485455d5c473387cb83ef6c0d4c78851e1205",
	                    "EndpointID": "926ba760a71646fffe60727e536bdd963b828ab3f098ec718803cf88fb5aec60",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-834228",
	                        "078bf67023c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
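The docker inspect output above records how the container's exposed ports (22, 2376, 5000, 8443, 32443) are published on loopback, e.g. the Kubernetes apiserver port 8443/tcp at 127.0.0.1:33116. A minimal Go sketch for extracting one such binding from the inspect JSON (it assumes exactly the structure shown above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models just the slice of `docker inspect` output needed here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-834228").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		if len(entries) == 0 {
			panic("no such container")
		}
		for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33116
		}
	}

Functionally this matches docker inspect --format with a Go template, which docker also supports; decoding the JSON is just the explicit form of the same lookup.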
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228: exit status 2 (383.490309ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-834228 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-834228 logs -n 25: (1.307440882s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:59.394267  323285 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:59.394622  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394635  323285 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:59.394640  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394949  323285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:59.395669  323285 out.go:368] Setting JSON to false
	I1029 09:10:59.397426  323285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3207,"bootTime":1761725852,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:59.397490  323285 start.go:143] virtualization: kvm guest
	I1029 09:10:59.399709  323285 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:59.401275  323285 notify.go:221] Checking for updates...
	I1029 09:10:59.401303  323285 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:59.402811  323285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:59.404227  323285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:59.405575  323285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:59.406888  323285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:59.408222  323285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:59.410015  323285 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410148  323285 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410263  323285 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410378  323285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:59.435730  323285 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:59.435827  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.503060  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.489541208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.503182  323285 docker.go:319] overlay module found
	I1029 09:10:59.505269  323285 out.go:179] * Using the docker driver based on user configuration
	I1029 09:10:59.506723  323285 start.go:309] selected driver: docker
	I1029 09:10:59.506741  323285 start.go:930] validating driver "docker" against <nil>
	I1029 09:10:59.506755  323285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:59.507436  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.587780  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.571693978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.588075  323285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1029 09:10:59.588122  323285 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1029 09:10:59.588720  323285 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:10:59.590863  323285 out.go:179] * Using Docker driver with root privileges
	I1029 09:10:59.592506  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:10:59.592592  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:59.592606  323285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:10:59.592730  323285 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:59.594390  323285 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:10:59.595763  323285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:59.597231  323285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:59.598574  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:59.598631  323285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:59.598649  323285 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:59.598672  323285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:59.598768  323285 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:59.598779  323285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:10:59.598919  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:10:59.598949  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json: {Name:mked5dfa4485c424df381c0f3cdc9d7d7ae817f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:59.625501  323285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:59.625521  323285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:59.625543  323285 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:10:59.625570  323285 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:59.625670  323285 start.go:364] duration metric: took 83.48µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:10:59.625695  323285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:59.625758  323285 start.go:125] createHost starting for "" (driver="docker")
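	
	For reference, the "Last Start" log above corresponds to the CLI invocation recorded in the audit table earlier in this report; a sketch for local reproduction (binary path, profile name, and flags all taken from that table entry):
	
	  out/minikube-linux-amd64 start -p newest-cni-259430 --memory=3072 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1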
	
	
	==> CRI-O <==
	Oct 29 09:10:17 embed-certs-834228 crio[560]: time="2025-10-29T09:10:17.817081997Z" level=info msg="Started container" PID=1722 containerID=9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper id=90b8d865-7bad-44bb-a330-61ca98ca8c50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f25e0d9988718bd3575e4611b9d09b38dc806d77f48c5eb63b00738f084e6f8f
	Oct 29 09:10:18 embed-certs-834228 crio[560]: time="2025-10-29T09:10:18.765478878Z" level=info msg="Removing container: 2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762" id=8ff5f0a7-f4f5-4d4d-8d5d-2ff03ce787ee name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:18 embed-certs-834228 crio[560]: time="2025-10-29T09:10:18.825815323Z" level=info msg="Removed container 2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=8ff5f0a7-f4f5-4d4d-8d5d-2ff03ce787ee name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.643130293Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=606b2f7e-d8ec-4e75-843b-f98757f52722 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.644235962Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=898386f4-a63f-467d-98b4-18683e2e966b name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.645310506Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=5b1d2342-87c3-4a0e-bacc-8bccaf00d00d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.645461148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.652466447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.653168471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.681867152Z" level=info msg="Created container 0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=5b1d2342-87c3-4a0e-bacc-8bccaf00d00d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.682548372Z" level=info msg="Starting container: 0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc" id=046724c6-3997-4b9e-9cac-060b276239a1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.684660769Z" level=info msg="Started container" PID=1732 containerID=0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper id=046724c6-3997-4b9e-9cac-060b276239a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f25e0d9988718bd3575e4611b9d09b38dc806d77f48c5eb63b00738f084e6f8f
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.806566254Z" level=info msg="Removing container: 9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b" id=f10d12b2-eeed-4780-aa8d-cbf86a148dac name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:35 embed-certs-834228 crio[560]: time="2025-10-29T09:10:35.817584364Z" level=info msg="Removed container 9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b/dashboard-metrics-scraper" id=f10d12b2-eeed-4780-aa8d-cbf86a148dac name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.811805353Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9404538b-6d72-4829-9693-80601c551011 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.81283944Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8efe9e22-cb7d-4e8d-aa88-e050be29fb3c name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.813987112Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e5ee6279-b769-4bf9-b196-6327365c4fc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.814169214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.818874793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.819083049Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c2905fc6c5acbf5c484b443007d3b497a12ec652272d105ef408882ca993750e/merged/etc/passwd: no such file or directory"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.819116231Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c2905fc6c5acbf5c484b443007d3b497a12ec652272d105ef408882ca993750e/merged/etc/group: no such file or directory"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.819415843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.847311314Z" level=info msg="Created container 99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4: kube-system/storage-provisioner/storage-provisioner" id=e5ee6279-b769-4bf9-b196-6327365c4fc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.84801127Z" level=info msg="Starting container: 99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4" id=277021ed-9b0c-48ef-ae4f-833e64c69bfa name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:36 embed-certs-834228 crio[560]: time="2025-10-29T09:10:36.850093501Z" level=info msg="Started container" PID=1746 containerID=99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4 description=kube-system/storage-provisioner/storage-provisioner id=277021ed-9b0c-48ef-ae4f-833e64c69bfa name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9efc6f74338a7d12a3c9fb968b2a6f6af2a2ad198a45e2ad041f85efee0e8fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	99394d9374394       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   c9efc6f74338a       storage-provisioner                          kube-system
	0fe34722ad970       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   f25e0d9988718       dashboard-metrics-scraper-6ffb444bf9-l5d9b   kubernetes-dashboard
	78b8db7671738       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   803866a2a7b02       kubernetes-dashboard-855c9754f9-c42hl        kubernetes-dashboard
	6059afdcbebe5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   4e934fd2aa907       coredns-66bc5c9577-w9vf6                     kube-system
	016253316a2e4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   5528cdd9c56bb       busybox                                      default
	4b8003b860d3d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   c9efc6f74338a       storage-provisioner                          kube-system
	f5bd47442b5f7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   f69d5ba3c7c6c       kube-proxy-bxthb                             kube-system
	e17241a0d1168       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   59837b14e23d1       kindnet-dgkfz                                kube-system
	66aa912baa9af       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   cece7a64b45f4       kube-controller-manager-embed-certs-834228   kube-system
	b1e012893324d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   7256ed6272871       etcd-embed-certs-834228                      kube-system
	0d384ad349a4f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   4da5cf70ed7d2       kube-scheduler-embed-certs-834228            kube-system
	f516353885ecb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   7abf27d9ba477       kube-apiserver-embed-certs-834228            kube-system
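	
	The table above reflects CRI-O's view of the node's containers. A sketch to regenerate it while the profile is still running (crictl ships in the minikube node image; the profile name is taken from this report):
	
	  minikube ssh -p embed-certs-834228 -- sudo crictl ps -a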
	
	
	==> coredns [6059afdcbebe5ca0a661823dd7be248fa4f6a9c15ea88db99e03727fb7d8a75e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60698 - 49364 "HINFO IN 2456188340904353391.2179157063304168297. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064360292s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
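	
	The dial timeouts to 10.96.0.1:443 above are CoreDNS failing to reach the in-cluster apiserver VIP, which is typically transient while pod networking is still being programmed (see the kindnet log below). A minimal check against the service and its backing endpoints, assuming the kubeconfig context is named after the profile as minikube does by default:
	
	  kubectl --context embed-certs-834228 -n default get svc,endpoints kubernetes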
	
	
	==> describe nodes <==
	Name:               embed-certs-834228
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-834228
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=embed-certs-834228
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-834228
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:10:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-834228
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d617b188-ae12-430b-83d6-e9ef5bc4858e
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-w9vf6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-834228                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-dgkfz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-834228             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-834228    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-bxthb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-834228             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l5d9b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c42hl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-834228 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-834228 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-834228 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node embed-certs-834228 event: Registered Node embed-certs-834228 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-834228 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-834228 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-834228 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-834228 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-834228 event: Registered Node embed-certs-834228 in Controller
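	
	This section mirrors kubectl describe output; to regenerate it against the live profile (context name again assumed to match the profile):
	
	  kubectl --context embed-certs-834228 describe node embed-certs-834228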
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [b1e012893324df4a540415d2a2a886bc9306d87f1be54870a37e70562f009661] <==
	{"level":"warn","ts":"2025-10-29T09:10:04.752127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.763289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.772533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.781788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.790583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.804285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.813862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.824746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.837814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.850287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.862558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.874553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.886305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.893580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.903608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.912418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.920923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.933209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.939301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.949501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.956540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.975700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.985525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:04.994188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.051705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:02 up 53 min,  0 user,  load average: 5.89, 4.35, 2.70
	Linux embed-certs-834228 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e17241a0d1168446f12fd7f52847d34e2e3d87b159fe20668c7e1f2d7cdefe80] <==
	I1029 09:10:06.306662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:06.307147       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1029 09:10:06.307337       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:06.307353       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:06.307373       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:06.510822       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:06.511817       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:06.511864       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:06.512043       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:06.912683       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:06.912709       1 metrics.go:72] Registering metrics
	I1029 09:10:06.912767       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:16.511117       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:16.511184       1 main.go:301] handling current node
	I1029 09:10:26.510841       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:26.510924       1 main.go:301] handling current node
	I1029 09:10:36.510673       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:36.510727       1 main.go:301] handling current node
	I1029 09:10:46.510840       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:46.510873       1 main.go:301] handling current node
	I1029 09:10:56.519834       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1029 09:10:56.519870       1 main.go:301] handling current node
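	
	The "nri plugin exited" line above means kindnet found no NRI socket to attach to; it continues without it, so the pod network is unaffected. Whether the runtime exposes NRI can be checked from the node (a sketch; the relevant config section depends on the CRI-O version):
	
	  minikube ssh -p embed-certs-834228 -- sudo crio config | grep -i -A2 nri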
	
	
	==> kube-apiserver [f516353885ecbd2eb5072fd9daac8f0cc0f088a1992d0c02fe4ca4ec5d2f5393] <==
	I1029 09:10:05.631492       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:10:05.632480       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:10:05.638833       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:10:05.640015       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:10:05.643039       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:10:05.643061       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:10:05.647098       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:10:05.647528       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:10:05.647565       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:05.647592       1 cache.go:39] Caches are synced for autoregister controller
	E1029 09:10:05.671689       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:10:05.680335       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:10:05.682533       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:10:05.729588       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:10:05.736783       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:10:05.981904       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:10:06.037547       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:10:06.085335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:06.098722       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:06.149963       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.141.180"}
	I1029 09:10:06.164549       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.73.29"}
	I1029 09:10:06.537700       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:09.086257       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:10:09.285723       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:10:09.636038       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [66aa912baa9af98f94ecb5b252508da4dacdaa895aab155c9bbd90f2b07a6d67] <==
	I1029 09:10:09.005511       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:10:09.007849       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:10:09.010466       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:10:09.012816       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:10:09.015110       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1029 09:10:09.023375       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:10:09.025749       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:10:09.031293       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:10:09.031425       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:10:09.031498       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:10:09.031510       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:10:09.031519       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:10:09.031965       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:10:09.033002       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:10:09.033024       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:10:09.033038       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:10:09.033071       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:10:09.033092       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:10:09.034430       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:10:09.034580       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:10:09.034640       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-834228"
	I1029 09:10:09.034682       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 09:10:09.037731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:09.051604       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:09.059837       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f5bd47442b5f7f9b7a37ce50069bed596566fa194f52aaa92883fd90301afcf4] <==
	I1029 09:10:06.138963       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:10:06.235387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:10:06.335940       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:10:06.336019       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1029 09:10:06.336102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:10:06.358033       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:06.358097       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:10:06.364317       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:10:06.364673       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:10:06.364730       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:06.366149       1 config.go:200] "Starting service config controller"
	I1029 09:10:06.366168       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:10:06.366373       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:10:06.366393       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:10:06.366416       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:10:06.366421       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:10:06.366508       1 config.go:309] "Starting node config controller"
	I1029 09:10:06.366551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:10:06.366559       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:10:06.466690       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:10:06.466712       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:10:06.466683       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0d384ad349a4f9e1f45c716a0c367d307afd1e671eccf883335f5764690e871a] <==
	I1029 09:10:04.279235       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:10:05.557819       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:10:05.557872       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:10:05.557886       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:10:05.557895       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:10:05.636628       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:10:05.640501       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:05.647615       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:05.647674       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:05.648127       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:10:05.648258       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:10:05.747866       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661099     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7403dbe1-d1c8-4f97-8971-afca363eee74-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5d9b\" (UID: \"7403dbe1-d1c8-4f97-8971-afca363eee74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b"
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661155     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpl2k\" (UniqueName: \"kubernetes.io/projected/7403dbe1-d1c8-4f97-8971-afca363eee74-kube-api-access-tpl2k\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5d9b\" (UID: \"7403dbe1-d1c8-4f97-8971-afca363eee74\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b"
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661177     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hmbw\" (UniqueName: \"kubernetes.io/projected/f4b1270f-98af-4824-964a-6a694dbaa678-kube-api-access-5hmbw\") pod \"kubernetes-dashboard-855c9754f9-c42hl\" (UID: \"f4b1270f-98af-4824-964a-6a694dbaa678\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c42hl"
	Oct 29 09:10:09 embed-certs-834228 kubelet[717]: I1029 09:10:09.661197     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4b1270f-98af-4824-964a-6a694dbaa678-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-c42hl\" (UID: \"f4b1270f-98af-4824-964a-6a694dbaa678\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c42hl"
	Oct 29 09:10:12 embed-certs-834228 kubelet[717]: I1029 09:10:12.106975     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:10:13 embed-certs-834228 kubelet[717]: I1029 09:10:13.747069     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c42hl" podStartSLOduration=0.996309711 podStartE2EDuration="4.74704551s" podCreationTimestamp="2025-10-29 09:10:09 +0000 UTC" firstStartedPulling="2025-10-29 09:10:09.895548526 +0000 UTC m=+7.355748099" lastFinishedPulling="2025-10-29 09:10:13.64628432 +0000 UTC m=+11.106483898" observedRunningTime="2025-10-29 09:10:13.746459493 +0000 UTC m=+11.206659054" watchObservedRunningTime="2025-10-29 09:10:13.74704551 +0000 UTC m=+11.207245090"
	Oct 29 09:10:17 embed-certs-834228 kubelet[717]: I1029 09:10:17.749362     717 scope.go:117] "RemoveContainer" containerID="2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762"
	Oct 29 09:10:18 embed-certs-834228 kubelet[717]: I1029 09:10:18.755003     717 scope.go:117] "RemoveContainer" containerID="2bf754951b4ba841429d1416789ba3bb24d18205f9354ded450d8814d4a5f762"
	Oct 29 09:10:18 embed-certs-834228 kubelet[717]: I1029 09:10:18.755193     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:18 embed-certs-834228 kubelet[717]: E1029 09:10:18.755423     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:19 embed-certs-834228 kubelet[717]: I1029 09:10:19.760623     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:19 embed-certs-834228 kubelet[717]: E1029 09:10:19.760808     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:25 embed-certs-834228 kubelet[717]: I1029 09:10:25.191734     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:25 embed-certs-834228 kubelet[717]: E1029 09:10:25.191978     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: I1029 09:10:35.642538     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: I1029 09:10:35.805251     717 scope.go:117] "RemoveContainer" containerID="9ded5dc5462bddc38938c5b7bf5e838234574f90c31cf22183d27cb37f92d68b"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: I1029 09:10:35.805473     717 scope.go:117] "RemoveContainer" containerID="0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	Oct 29 09:10:35 embed-certs-834228 kubelet[717]: E1029 09:10:35.805696     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:36 embed-certs-834228 kubelet[717]: I1029 09:10:36.811397     717 scope.go:117] "RemoveContainer" containerID="4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae"
	Oct 29 09:10:45 embed-certs-834228 kubelet[717]: I1029 09:10:45.191876     717 scope.go:117] "RemoveContainer" containerID="0fe34722ad970790d7f290f400d34568b18937464fe5d1e524c48438c7d600fc"
	Oct 29 09:10:45 embed-certs-834228 kubelet[717]: E1029 09:10:45.192223     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5d9b_kubernetes-dashboard(7403dbe1-d1c8-4f97-8971-afca363eee74)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5d9b" podUID="7403dbe1-d1c8-4f97-8971-afca363eee74"
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:10:57 embed-certs-834228 systemd[1]: kubelet.service: Consumed 1.853s CPU time.
	
	
	==> kubernetes-dashboard [78b8db7671738ad247acc38622873834d4d545afb33749490f066640abb90793] <==
	2025/10/29 09:10:13 Starting overwatch
	2025/10/29 09:10:13 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:13 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:13 Using secret token for csrf signing
	2025/10/29 09:10:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:10:13 Generating JWE encryption key
	2025/10/29 09:10:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:14 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:14 Creating in-cluster Sidecar client
	2025/10/29 09:10:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:14 Serving insecurely on HTTP port: 9090
	2025/10/29 09:10:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4b8003b860d3d3cd1d5902b956a476ee7b96cbf48d1a25c4f3a01fd291cac8ae] <==
	I1029 09:10:06.090811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:10:36.093269       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [99394d937439462468398408a13c0dbc12c367c545eed7771a9818856c9c2fe4] <==
	I1029 09:10:36.863084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:10:36.871077       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:10:36.871135       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:10:36.873867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:40.328970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:44.589954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:48.189059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:51.243509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:54.266226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:54.274949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:54.275180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:10:54.275379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-834228_a6f275e9-629d-4d58-91ca-2adc5ce50bea!
	I1029 09:10:54.275423       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b974d585-d04f-4d47-a5da-d6dd7320fe4f", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-834228_a6f275e9-629d-4d58-91ca-2adc5ce50bea became leader
	W1029 09:10:54.277958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:54.281535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:54.376261       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-834228_a6f275e9-629d-4d58-91ca-2adc5ce50bea!
	W1029 09:10:56.285054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:56.289399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:58.293317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:58.298461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:00.303257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:00.310779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:02.315229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:02.320547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-834228 -n embed-certs-834228
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-834228 -n embed-certs-834228: exit status 2 (363.922558ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-834228 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.76s)
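The post-mortem above ends with only a status probe and a pod listing; a minimal manual re-check of the same profile, mirroring the harness commands (the --format fields Host and APIServer appear in the post-mortem itself, Kubelet is assumed to be a valid status field, and this assumes the embed-certs-834228 profile still exists), would be:

	# probe the same status fields the harness checks after a failed pause
	out/minikube-linux-amd64 status -p embed-certs-834228 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# list any pods that are not Running, as helpers_test.go does
	kubectl --context embed-certs-834228 get po -A --field-selector=status.phase!=Running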

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-043790 --alsologtostderr -v=1
E1029 09:10:58.243201    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-043790 --alsologtostderr -v=1: exit status 80 (1.711105748s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-043790 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:10:58.144648  322692 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:58.144934  322692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:58.144948  322692 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:58.144955  322692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:58.145257  322692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:58.145501  322692 out.go:368] Setting JSON to false
	I1029 09:10:58.145523  322692 mustload.go:66] Loading cluster: no-preload-043790
	I1029 09:10:58.146107  322692 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:58.146494  322692 cli_runner.go:164] Run: docker container inspect no-preload-043790 --format={{.State.Status}}
	I1029 09:10:58.166110  322692 host.go:66] Checking if "no-preload-043790" exists ...
	I1029 09:10:58.166390  322692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:58.242951  322692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-29 09:10:58.232250708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:58.243708  322692 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-043790 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:10:58.250167  322692 out.go:179] * Pausing node no-preload-043790 ... 
	I1029 09:10:58.252453  322692 host.go:66] Checking if "no-preload-043790" exists ...
	I1029 09:10:58.252841  322692 ssh_runner.go:195] Run: systemctl --version
	I1029 09:10:58.252895  322692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-043790
	I1029 09:10:58.273223  322692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/no-preload-043790/id_rsa Username:docker}
	I1029 09:10:58.378508  322692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:58.405120  322692 pause.go:52] kubelet running: true
	I1029 09:10:58.405196  322692 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:58.583018  322692 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:58.583130  322692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:58.659535  322692 cri.go:89] found id: "7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a"
	I1029 09:10:58.659564  322692 cri.go:89] found id: "b489e2ba9e90c267cfa27ab1c914b7469058dfffa21f018ab0528e3073136a8d"
	I1029 09:10:58.659571  322692 cri.go:89] found id: "fa5efc086742b530fe5144b57cb128e0ddfb74ce8e5e2c6464551027e860b71e"
	I1029 09:10:58.659576  322692 cri.go:89] found id: "fb2c325d870e27637e135d08851105d6ba6af229cdf2312e1ed2631ee749d98a"
	I1029 09:10:58.659580  322692 cri.go:89] found id: "bc35edff0806a2c02a9e819af40f13f2fcb050f39073754abfad9eb0cccd877a"
	I1029 09:10:58.659585  322692 cri.go:89] found id: "90320debdab793d8acd7009f9643c60d73f0cb96a8b824f6fde5cdeab7a2d1c0"
	I1029 09:10:58.659589  322692 cri.go:89] found id: "19319884348b2e0458cd97dc51733e41464962a2500ec16c77771a98ba4e8b27"
	I1029 09:10:58.659593  322692 cri.go:89] found id: "aef6cdacaff629417a19cf93c9fdd05bdebca3a660634d42e64d3d9b50f6be3b"
	I1029 09:10:58.659595  322692 cri.go:89] found id: "a8e01bc837509e1a7e1a5c19a35ea64e574acd55a0d06c30f68441a4dc29ff7c"
	I1029 09:10:58.659601  322692 cri.go:89] found id: "13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	I1029 09:10:58.659604  322692 cri.go:89] found id: "3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531"
	I1029 09:10:58.659607  322692 cri.go:89] found id: ""
	I1029 09:10:58.659651  322692 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:58.671973  322692 retry.go:31] will retry after 182.718315ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:58Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:58.855319  322692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:58.887971  322692 pause.go:52] kubelet running: false
	I1029 09:10:58.888071  322692 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:59.049535  322692 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:59.049611  322692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:59.133256  322692 cri.go:89] found id: "7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a"
	I1029 09:10:59.133298  322692 cri.go:89] found id: "b489e2ba9e90c267cfa27ab1c914b7469058dfffa21f018ab0528e3073136a8d"
	I1029 09:10:59.133303  322692 cri.go:89] found id: "fa5efc086742b530fe5144b57cb128e0ddfb74ce8e5e2c6464551027e860b71e"
	I1029 09:10:59.133308  322692 cri.go:89] found id: "fb2c325d870e27637e135d08851105d6ba6af229cdf2312e1ed2631ee749d98a"
	I1029 09:10:59.133312  322692 cri.go:89] found id: "bc35edff0806a2c02a9e819af40f13f2fcb050f39073754abfad9eb0cccd877a"
	I1029 09:10:59.133317  322692 cri.go:89] found id: "90320debdab793d8acd7009f9643c60d73f0cb96a8b824f6fde5cdeab7a2d1c0"
	I1029 09:10:59.133321  322692 cri.go:89] found id: "19319884348b2e0458cd97dc51733e41464962a2500ec16c77771a98ba4e8b27"
	I1029 09:10:59.133325  322692 cri.go:89] found id: "aef6cdacaff629417a19cf93c9fdd05bdebca3a660634d42e64d3d9b50f6be3b"
	I1029 09:10:59.133330  322692 cri.go:89] found id: "a8e01bc837509e1a7e1a5c19a35ea64e574acd55a0d06c30f68441a4dc29ff7c"
	I1029 09:10:59.133350  322692 cri.go:89] found id: "13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	I1029 09:10:59.133356  322692 cri.go:89] found id: "3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531"
	I1029 09:10:59.133359  322692 cri.go:89] found id: ""
	I1029 09:10:59.133405  322692 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:59.148964  322692 retry.go:31] will retry after 311.552185ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:59Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:10:59.461523  322692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:10:59.479117  322692 pause.go:52] kubelet running: false
	I1029 09:10:59.479591  322692 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:10:59.674540  322692 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:10:59.674620  322692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:10:59.761703  322692 cri.go:89] found id: "7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a"
	I1029 09:10:59.761739  322692 cri.go:89] found id: "b489e2ba9e90c267cfa27ab1c914b7469058dfffa21f018ab0528e3073136a8d"
	I1029 09:10:59.761747  322692 cri.go:89] found id: "fa5efc086742b530fe5144b57cb128e0ddfb74ce8e5e2c6464551027e860b71e"
	I1029 09:10:59.761752  322692 cri.go:89] found id: "fb2c325d870e27637e135d08851105d6ba6af229cdf2312e1ed2631ee749d98a"
	I1029 09:10:59.761766  322692 cri.go:89] found id: "bc35edff0806a2c02a9e819af40f13f2fcb050f39073754abfad9eb0cccd877a"
	I1029 09:10:59.761771  322692 cri.go:89] found id: "90320debdab793d8acd7009f9643c60d73f0cb96a8b824f6fde5cdeab7a2d1c0"
	I1029 09:10:59.761775  322692 cri.go:89] found id: "19319884348b2e0458cd97dc51733e41464962a2500ec16c77771a98ba4e8b27"
	I1029 09:10:59.761779  322692 cri.go:89] found id: "aef6cdacaff629417a19cf93c9fdd05bdebca3a660634d42e64d3d9b50f6be3b"
	I1029 09:10:59.761784  322692 cri.go:89] found id: "a8e01bc837509e1a7e1a5c19a35ea64e574acd55a0d06c30f68441a4dc29ff7c"
	I1029 09:10:59.761792  322692 cri.go:89] found id: "13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	I1029 09:10:59.761802  322692 cri.go:89] found id: "3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531"
	I1029 09:10:59.761806  322692 cri.go:89] found id: ""
	I1029 09:10:59.761855  322692 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:10:59.777896  322692 out.go:203] 
	W1029 09:10:59.779344  322692 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:10:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:10:59.779364  322692 out.go:285] * 
	* 
	W1029 09:10:59.785807  322692 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:10:59.787485  322692 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-043790 --alsologtostderr -v=1 failed: exit status 80
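Every retry in the stderr above fails at the same step: `sudo runc list -f json` exits with status 1 because /run/runc does not exist, so the pause path never reaches the containers it already found via crictl. A minimal manual reproduction of that step, assuming SSH access through the profile and that runc's state directory on this crio node is /run/runc as the error message indicates, would be:

	# re-run the exact listing command the pause path retried, inside the node
	out/minikube-linux-amd64 -p no-preload-043790 ssh -- sudo runc list -f json
	# confirm whether runc's state directory exists at all
	out/minikube-linux-amd64 -p no-preload-043790 ssh -- sudo ls -la /run/runc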
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-043790
helpers_test.go:243: (dbg) docker inspect no-preload-043790:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7",
	        "Created": "2025-10-29T09:08:34.171867381Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:09:56.426043668Z",
	            "FinishedAt": "2025-10-29T09:09:55.43221818Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/hosts",
	        "LogPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7-json.log",
	        "Name": "/no-preload-043790",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-043790:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-043790",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7",
	                "LowerDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-043790",
	                "Source": "/var/lib/docker/volumes/no-preload-043790/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-043790",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-043790",
	                "name.minikube.sigs.k8s.io": "no-preload-043790",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebd67f4ca8d98b74e11039a34df9f6e928ce01fa8bc581c62f58aa1d3bc00ba7",
	            "SandboxKey": "/var/run/docker/netns/ebd67f4ca8d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-043790": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:30:b7:81:9a:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dcc575c7384eba10361bed7adc7ddf8a9bfff63d366b63895fcc568dd1c4ba1d",
	                    "EndpointID": "c59585f41570b7cb01866977b22e03dc1be04d5fd634196f8df0ec2bbf1a3424",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-043790",
	                        "b2e7560bb45a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790: exit status 2 (425.003606ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-043790 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-043790 logs -n 25: (1.335064934s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:59.394267  323285 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:59.394622  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394635  323285 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:59.394640  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394949  323285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:59.395669  323285 out.go:368] Setting JSON to false
	I1029 09:10:59.397426  323285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3207,"bootTime":1761725852,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:59.397490  323285 start.go:143] virtualization: kvm guest
	I1029 09:10:59.399709  323285 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:59.401275  323285 notify.go:221] Checking for updates...
	I1029 09:10:59.401303  323285 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:59.402811  323285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:59.404227  323285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:59.405575  323285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:59.406888  323285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:59.408222  323285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:59.410015  323285 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410148  323285 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410263  323285 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410378  323285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:59.435730  323285 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:59.435827  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.503060  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.489541208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.503182  323285 docker.go:319] overlay module found
	I1029 09:10:59.505269  323285 out.go:179] * Using the docker driver based on user configuration
	I1029 09:10:59.506723  323285 start.go:309] selected driver: docker
	I1029 09:10:59.506741  323285 start.go:930] validating driver "docker" against <nil>
	I1029 09:10:59.506755  323285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:59.507436  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.587780  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.571693978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.588075  323285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1029 09:10:59.588122  323285 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1029 09:10:59.588720  323285 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:10:59.590863  323285 out.go:179] * Using Docker driver with root privileges
	I1029 09:10:59.592506  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:10:59.592592  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:59.592606  323285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:10:59.592730  323285 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:59.594390  323285 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:10:59.595763  323285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:59.597231  323285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:59.598574  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:59.598631  323285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:59.598649  323285 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:59.598672  323285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:59.598768  323285 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:59.598779  323285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:10:59.598919  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:10:59.598949  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json: {Name:mked5dfa4485c424df381c0f3cdc9d7d7ae817f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:59.625501  323285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:59.625521  323285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:59.625543  323285 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:10:59.625570  323285 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:59.625670  323285 start.go:364] duration metric: took 83.48µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:10:59.625695  323285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:59.625758  323285 start.go:125] createHost starting for "" (driver="docker")
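	
	The start log above also records where minikube persisted this profile's cluster config (profile.go:143). A minimal way to re-inspect that saved config on the build host, a sketch using the binary and .minikube paths taken from the log:
	
	  # list known profiles and their current status
	  out/minikube-linux-amd64 profile list
	  # pretty-print the persisted cluster config for the newest-cni profile
	  python3 -m json.tool /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json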
	
	
	==> CRI-O <==
	Oct 29 09:10:19 no-preload-043790 crio[559]: time="2025-10-29T09:10:19.413974647Z" level=info msg="Created container 3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h/kubernetes-dashboard" id=0d323533-48f2-4362-8064-7e24cbfbd3ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:19 no-preload-043790 crio[559]: time="2025-10-29T09:10:19.41473379Z" level=info msg="Starting container: 3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531" id=2e40b815-e362-471a-814e-8332f5406d87 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:19 no-preload-043790 crio[559]: time="2025-10-29T09:10:19.417287848Z" level=info msg="Started container" PID=1700 containerID=3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h/kubernetes-dashboard id=2e40b815-e362-471a-814e-8332f5406d87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7b7a6f20cb3378b3f60d07a9e801dadb2e73975d116a4dfbffcd4a1ad09cce6
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.106172149Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98188917-e07b-40f9-9106-6d2ac8edd18e name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.107086923Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=677047a2-795d-43d8-ba44-5e0e6cfb544a name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.108177862Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper" id=1c725c58-dd66-4f17-9877-14835aae14a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.108316146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.114110586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.114878748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.145448909Z" level=info msg="Created container 13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper" id=1c725c58-dd66-4f17-9877-14835aae14a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.146091289Z" level=info msg="Starting container: 13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d" id=70cff73c-366e-4e15-a3ea-6ae1aac1eb33 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.148410719Z" level=info msg="Started container" PID=1718 containerID=13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper id=70cff73c-366e-4e15-a3ea-6ae1aac1eb33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c032868ff8717e79b48e1515e3cc6e9317e0fdae4c2c883c7a456e6523d68500
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.273185933Z" level=info msg="Removing container: 06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc" id=897fb620-ea15-4d49-8ee2-7633d8318282 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.28380213Z" level=info msg="Removed container 06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper" id=897fb620-ea15-4d49-8ee2-7633d8318282 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.278138457Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eba38c12-5dbd-4209-b6e8-9076bdc713e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.27912314Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9be6d328-fbc1-49a6-83ed-ca1f2128e5eb name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.280221569Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6d9942b5-874b-4def-a53d-1d4f1ea02829 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.28035466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.284894305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.285115523Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c49e3ef40a2b0e491e14cbd558cf2e9620f979954422d73cf2b786fcf749dbdb/merged/etc/passwd: no such file or directory"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.285151955Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c49e3ef40a2b0e491e14cbd558cf2e9620f979954422d73cf2b786fcf749dbdb/merged/etc/group: no such file or directory"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.285449758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.311094316Z" level=info msg="Created container 7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a: kube-system/storage-provisioner/storage-provisioner" id=6d9942b5-874b-4def-a53d-1d4f1ea02829 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.311797255Z" level=info msg="Starting container: 7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a" id=9e1db4e4-fafb-4bd6-84d6-6a13b184631d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.314132933Z" level=info msg="Started container" PID=1732 containerID=7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a description=kube-system/storage-provisioner/storage-provisioner id=9e1db4e4-fafb-4bd6-84d6-6a13b184631d name=/runtime.v1.RuntimeService/StartContainer sandboxID=70f1259bd34ca143e79384eef8ab3599d833ace5408aab817ad69308ee6360b9
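	
	The container IDs in the CRI-O log can be matched against the "container status" table below using crictl from inside the node. A sketch, assuming crictl is available in the node image as it normally is for cri-o-based minikube nodes:
	
	  # open a shell on the node for this profile
	  out/minikube-linux-amd64 -p no-preload-043790 ssh
	  # list all containers, including exited attempts of the crash-looping scraper
	  sudo crictl ps -a
	  # dump the logs of a specific container by a unique prefix of its ID
	  sudo crictl logs 13ade4866a502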
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7da3212bbc11f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   70f1259bd34ca       storage-provisioner                          kube-system
	13ade4866a502       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   c032868ff8717       dashboard-metrics-scraper-6ffb444bf9-4pbhj   kubernetes-dashboard
	3f51be0b0b47d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   e7b7a6f20cb33       kubernetes-dashboard-855c9754f9-ntb8h        kubernetes-dashboard
	a0c546a847f92       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   16bd601894d17       busybox                                      default
	b489e2ba9e90c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   14526e5cbe68b       coredns-66bc5c9577-bgslp                     kube-system
	fa5efc086742b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   70f1259bd34ca       storage-provisioner                          kube-system
	fb2c325d870e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   3529fb976d14f       kindnet-dlrgv                                kube-system
	bc35edff0806a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   84c1adbc0b66e       kube-proxy-7dc8p                             kube-system
	90320debdab79       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   8cc3d048b9d21       kube-controller-manager-no-preload-043790    kube-system
	19319884348b2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   e718ad027d830       etcd-no-preload-043790                       kube-system
	aef6cdacaff62       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   47c2df1b3a3a6       kube-scheduler-no-preload-043790             kube-system
	a8e01bc837509       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   5fce132cf53bc       kube-apiserver-no-preload-043790             kube-system
	
	
	==> coredns [b489e2ba9e90c267cfa27ab1c914b7469058dfffa21f018ab0528e3073136a8d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49275 - 303 "HINFO IN 6472767915351042671.8541757429697825518. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.508916624s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
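	
	The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors mean CoreDNS could not reach the in-cluster API service VIP while the control plane was coming back up. That path can be re-tested from inside the cluster; a sketch, where the pod name "netcheck" is arbitrary:
	
	  # one-shot TCP probe of the kubernetes service VIP from a throwaway pod
	  kubectl run netcheck --rm -it --restart=Never --image=busybox -- nc -zv -w 5 10.96.0.1 443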
	
	
	==> describe nodes <==
	Name:               no-preload-043790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-043790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=no-preload-043790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-043790
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:10:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-043790
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                75fc084d-43fe-4f22-be75-228a0a9d261e
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-bgslp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-043790                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-dlrgv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-043790              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-043790     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-7dc8p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-043790              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4pbhj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ntb8h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-043790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-043790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-043790 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node no-preload-043790 event: Registered Node no-preload-043790 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-043790 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node no-preload-043790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node no-preload-043790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node no-preload-043790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node no-preload-043790 event: Registered Node no-preload-043790 in Controller
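	
	The allocated CPU requests above are the sum of the per-pod requests in the table (100m CoreDNS + 100m etcd + 100m kindnet + 250m apiserver + 200m controller-manager + 100m scheduler = 850m), and memory likewise (70Mi + 100Mi + 50Mi = 220Mi). To recompute them without re-reading the whole dump, a sketch assuming kubectl points at this cluster:
	
	  kubectl describe node no-preload-043790 | grep -A 8 'Allocated resources'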
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
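	
	The "martian source" lines are the kernel logging packets whose source address is implausible for the interface they arrived on, which is typically benign noise in nested-container setups like this one where pod-CIDR traffic crosses the Docker bridge. Whether they are logged at all is a sysctl; a sketch, run on the host:
	
	  sysctl net.ipv4.conf.all.log_martians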
	
	
	==> etcd [19319884348b2e0458cd97dc51733e41464962a2500ec16c77771a98ba4e8b27] <==
	{"level":"warn","ts":"2025-10-29T09:10:05.332788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.339205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.349941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.359246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.366957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.373678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.380724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.387922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.402235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.409938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.424131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.433367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.443900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.457216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.467851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.484656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.486515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.495111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.503107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.521120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.527890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.537497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.640096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45622","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:10:18.360201Z","caller":"traceutil/trace.go:172","msg":"trace[1618832144] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"116.021954ms","start":"2025-10-29T09:10:18.244157Z","end":"2025-10-29T09:10:18.360179Z","steps":["trace[1618832144] 'process raft request'  (duration: 115.857406ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:10:18.761179Z","caller":"traceutil/trace.go:172","msg":"trace[619151242] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"164.309476ms","start":"2025-10-29T09:10:18.596850Z","end":"2025-10-29T09:10:18.761159Z","steps":["trace[619151242] 'process raft request'  (duration: 108.368938ms)","trace[619151242] 'compare'  (duration: 55.838336ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:11:01 up 53 min,  0 user,  load average: 5.89, 4.35, 2.70
	Linux no-preload-043790 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fb2c325d870e27637e135d08851105d6ba6af229cdf2312e1ed2631ee749d98a] <==
	I1029 09:10:07.588817       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:07.589070       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1029 09:10:07.589219       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:07.589240       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:07.589265       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:07.880548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:07.880623       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:07.880649       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:07.880840       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:08.185319       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:08.185460       1 metrics.go:72] Registering metrics
	I1029 09:10:08.185548       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:17.798372       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:17.798434       1 main.go:301] handling current node
	I1029 09:10:27.805203       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:27.805239       1 main.go:301] handling current node
	I1029 09:10:37.799330       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:37.799365       1 main.go:301] handling current node
	I1029 09:10:47.801405       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:47.801453       1 main.go:301] handling current node
	I1029 09:10:57.802633       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:57.802690       1 main.go:301] handling current node
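	
	kindnet is handling only the current node here and reports "Syncing nftables rules" for its network-policy plugin. Those rules can be inspected directly; a sketch, run inside the node:
	
	  sudo nft list ruleset | head -n 40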
	
	
	==> kube-apiserver [a8e01bc837509e1a7e1a5c19a35ea64e574acd55a0d06c30f68441a4dc29ff7c] <==
	I1029 09:10:06.250876       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:10:06.252636       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:10:06.253222       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:10:06.258331       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:10:06.258427       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1029 09:10:06.258504       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:10:06.258592       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:10:06.263628       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:10:06.263650       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:10:06.263674       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:06.263683       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:10:06.270488       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:10:06.288591       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:10:06.294812       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:10:06.563760       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:10:06.603027       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:10:06.635500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:06.647676       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:06.656921       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:10:06.713953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.221.240"}
	I1029 09:10:06.747152       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.241.69"}
	I1029 09:10:07.153279       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:09.677093       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:10:09.827870       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:10:10.230279       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
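	
	By the end of this log the apiserver has registered its quota evaluators and allocated ClusterIPs for the dashboard services, so it was serving when the dump was taken. Its per-check readiness can be queried directly, a sketch assuming kubectl points at this cluster:
	
	  # verbose readiness detail straight from the apiserver
	  kubectl get --raw '/readyz?verbose'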
	
	
	==> kube-controller-manager [90320debdab793d8acd7009f9643c60d73f0cb96a8b824f6fde5cdeab7a2d1c0] <==
	I1029 09:10:09.581379       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:10:09.581522       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:10:09.584271       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:10:09.584862       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:10:09.605105       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:10:09.607893       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:10:09.623799       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:10:09.623820       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:10:09.623848       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:10:09.623848       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:10:09.623912       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:10:09.623908       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:10:09.623942       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:10:09.624040       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:10:09.624241       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:10:09.624270       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:10:09.629342       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:09.635542       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:10:09.635615       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:10:09.635649       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:10:09.635659       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:10:09.635666       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:10:09.651830       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:10:10.233397       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1029 09:10:10.234106       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
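	
	The two "EndpointSlice informer cache is out of date" lines appear to be transient startup races; the controller re-queues the service and retries. The resulting slices can be verified afterwards, a sketch:
	
	  kubectl -n kubernetes-dashboard get endpointslices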
	
	
	==> kube-proxy [bc35edff0806a2c02a9e819af40f13f2fcb050f39073754abfad9eb0cccd877a] <==
	I1029 09:10:07.484198       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:10:07.551781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:10:07.652903       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:10:07.652965       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1029 09:10:07.653141       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:10:07.678628       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:07.678686       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:10:07.685152       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:10:07.685747       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:10:07.685802       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:07.687862       1 config.go:309] "Starting node config controller"
	I1029 09:10:07.687888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:10:07.687899       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:10:07.687937       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:10:07.687946       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:10:07.687964       1 config.go:200] "Starting service config controller"
	I1029 09:10:07.687970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:10:07.688026       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:10:07.688033       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:10:07.788117       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:10:07.788128       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:10:07.788118       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
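	
	kube-proxy is running the iptables proxier here, so per-service rules are programmed into the nat table's KUBE-SERVICES chain. To see what it installed for the dashboard services, a sketch run inside the node:
	
	  sudo iptables -t nat -L KUBE-SERVICES -n | grep -i dashboard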
	
	
	==> kube-scheduler [aef6cdacaff629417a19cf93c9fdd05bdebca3a660634d42e64d3d9b50f6be3b] <==
	I1029 09:10:04.340169       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:10:06.247291       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:10:06.247319       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:06.254368       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:10:06.254462       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 09:10:06.254547       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1029 09:10:06.254556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:10:06.254583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:06.254631       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:06.254653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:06.254661       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:06.355497       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:06.355578       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 09:10:06.355689       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:10:10 no-preload-043790 kubelet[701]: I1029 09:10:10.321501     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-266gg\" (UniqueName: \"kubernetes.io/projected/899c5d21-61f7-485e-9ee7-21097c5687fe-kube-api-access-266gg\") pod \"kubernetes-dashboard-855c9754f9-ntb8h\" (UID: \"899c5d21-61f7-485e-9ee7-21097c5687fe\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h"
	Oct 29 09:10:10 no-preload-043790 kubelet[701]: I1029 09:10:10.321530     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/899c5d21-61f7-485e-9ee7-21097c5687fe-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ntb8h\" (UID: \"899c5d21-61f7-485e-9ee7-21097c5687fe\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h"
	Oct 29 09:10:13 no-preload-043790 kubelet[701]: I1029 09:10:13.668589     701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:10:14 no-preload-043790 kubelet[701]: I1029 09:10:14.191056     701 scope.go:117] "RemoveContainer" containerID="379b3099e1b007ea9f5ec9b96157ff10ba01733376f1d9c7d8ac0a8219712af6"
	Oct 29 09:10:15 no-preload-043790 kubelet[701]: I1029 09:10:15.205027     701 scope.go:117] "RemoveContainer" containerID="379b3099e1b007ea9f5ec9b96157ff10ba01733376f1d9c7d8ac0a8219712af6"
	Oct 29 09:10:15 no-preload-043790 kubelet[701]: I1029 09:10:15.205839     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:15 no-preload-043790 kubelet[701]: E1029 09:10:15.206084     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:16 no-preload-043790 kubelet[701]: I1029 09:10:16.215311     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:16 no-preload-043790 kubelet[701]: E1029 09:10:16.215601     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:20 no-preload-043790 kubelet[701]: I1029 09:10:20.343980     701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h" podStartSLOduration=1.5601317479999999 podStartE2EDuration="10.343962029s" podCreationTimestamp="2025-10-29 09:10:10 +0000 UTC" firstStartedPulling="2025-10-29 09:10:10.543317098 +0000 UTC m=+7.554700530" lastFinishedPulling="2025-10-29 09:10:19.327147361 +0000 UTC m=+16.338530811" observedRunningTime="2025-10-29 09:10:20.242387274 +0000 UTC m=+17.253770705" watchObservedRunningTime="2025-10-29 09:10:20.343962029 +0000 UTC m=+17.355345480"
	Oct 29 09:10:23 no-preload-043790 kubelet[701]: I1029 09:10:23.955377     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:23 no-preload-043790 kubelet[701]: E1029 09:10:23.955602     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: I1029 09:10:37.105672     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: I1029 09:10:37.271687     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: I1029 09:10:37.271914     701 scope.go:117] "RemoveContainer" containerID="13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: E1029 09:10:37.272177     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:38 no-preload-043790 kubelet[701]: I1029 09:10:38.277751     701 scope.go:117] "RemoveContainer" containerID="fa5efc086742b530fe5144b57cb128e0ddfb74ce8e5e2c6464551027e860b71e"
	Oct 29 09:10:43 no-preload-043790 kubelet[701]: I1029 09:10:43.955463     701 scope.go:117] "RemoveContainer" containerID="13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	Oct 29 09:10:43 no-preload-043790 kubelet[701]: E1029 09:10:43.955723     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:57 no-preload-043790 kubelet[701]: I1029 09:10:57.107088     701 scope.go:117] "RemoveContainer" containerID="13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	Oct 29 09:10:57 no-preload-043790 kubelet[701]: E1029 09:10:57.107252     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:58 no-preload-043790 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:10:58 no-preload-043790 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:10:58 no-preload-043790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:10:58 no-preload-043790 systemd[1]: kubelet.service: Consumed 1.849s CPU time.
	
	
	==> kubernetes-dashboard [3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531] <==
	2025/10/29 09:10:19 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:19 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:19 Using secret token for csrf signing
	2025/10/29 09:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:19 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:10:19 Generating JWE encryption key
	2025/10/29 09:10:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:19 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:19 Creating in-cluster Sidecar client
	2025/10/29 09:10:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:19 Serving insecurely on HTTP port: 9090
	2025/10/29 09:10:19 Starting overwatch
	2025/10/29 09:10:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a] <==
	I1029 09:10:38.327092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:10:38.335751       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:10:38.335796       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:10:38.338177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:41.794269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:46.054694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:49.654343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:52.708269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:55.730637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:55.738027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:55.738212       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:10:55.738410       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-043790_8c7ba760-98db-4f8e-998e-e56b9b5aed97!
	I1029 09:10:55.738411       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f909cb5-1c5e-4bfe-af9b-4b8cebee1396", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-043790_8c7ba760-98db-4f8e-998e-e56b9b5aed97 became leader
	W1029 09:10:55.740535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:55.743647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:55.839278       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-043790_8c7ba760-98db-4f8e-998e-e56b9b5aed97!
	W1029 09:10:57.747276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:57.752050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:59.756828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:59.765644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fa5efc086742b530fe5144b57cb128e0ddfb74ce8e5e2c6464551027e860b71e] <==
	I1029 09:10:07.463291       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:10:37.468520       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043790 -n no-preload-043790
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043790 -n no-preload-043790: exit status 2 (368.413243ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-043790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-043790
helpers_test.go:243: (dbg) docker inspect no-preload-043790:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7",
	        "Created": "2025-10-29T09:08:34.171867381Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:09:56.426043668Z",
	            "FinishedAt": "2025-10-29T09:09:55.43221818Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/hosts",
	        "LogPath": "/var/lib/docker/containers/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7/b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7-json.log",
	        "Name": "/no-preload-043790",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-043790:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-043790",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b2e7560bb45a4d1bf216c43b401080cbf6f072e415c25123cffadd030d4b2cb7",
	                "LowerDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58d942a653991abae69a0bdf6841492fca5fc3fd6fabad6f0db77f0268252ce7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-043790",
	                "Source": "/var/lib/docker/volumes/no-preload-043790/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-043790",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-043790",
	                "name.minikube.sigs.k8s.io": "no-preload-043790",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebd67f4ca8d98b74e11039a34df9f6e928ce01fa8bc581c62f58aa1d3bc00ba7",
	            "SandboxKey": "/var/run/docker/netns/ebd67f4ca8d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-043790": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:30:b7:81:9a:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dcc575c7384eba10361bed7adc7ddf8a9bfff63d366b63895fcc568dd1c4ba1d",
	                    "EndpointID": "c59585f41570b7cb01866977b22e03dc1be04d5fd634196f8df0ec2bbf1a3424",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-043790",
	                        "b2e7560bb45a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790: exit status 2 (379.203319ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-043790 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-043790 logs -n 25: (3.194789556s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-096492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p old-k8s-version-096492 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-043790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │                     │
	│ stop    │ -p embed-certs-834228 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:59.394267  323285 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:59.394622  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394635  323285 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:59.394640  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394949  323285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:59.395669  323285 out.go:368] Setting JSON to false
	I1029 09:10:59.397426  323285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3207,"bootTime":1761725852,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:59.397490  323285 start.go:143] virtualization: kvm guest
	I1029 09:10:59.399709  323285 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:59.401275  323285 notify.go:221] Checking for updates...
	I1029 09:10:59.401303  323285 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:59.402811  323285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:59.404227  323285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:59.405575  323285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:59.406888  323285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:59.408222  323285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:59.410015  323285 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410148  323285 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410263  323285 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410378  323285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:59.435730  323285 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:59.435827  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.503060  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.489541208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.503182  323285 docker.go:319] overlay module found
	I1029 09:10:59.505269  323285 out.go:179] * Using the docker driver based on user configuration
	I1029 09:10:59.506723  323285 start.go:309] selected driver: docker
	I1029 09:10:59.506741  323285 start.go:930] validating driver "docker" against <nil>
	I1029 09:10:59.506755  323285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:59.507436  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.587780  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.571693978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.588075  323285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1029 09:10:59.588122  323285 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1029 09:10:59.588720  323285 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:10:59.590863  323285 out.go:179] * Using Docker driver with root privileges
	I1029 09:10:59.592506  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:10:59.592592  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:59.592606  323285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:10:59.592730  323285 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:59.594390  323285 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:10:59.595763  323285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:59.597231  323285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:59.598574  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:59.598631  323285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:59.598649  323285 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:59.598672  323285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:59.598768  323285 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:59.598779  323285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:10:59.598919  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:10:59.598949  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json: {Name:mked5dfa4485c424df381c0f3cdc9d7d7ae817f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:59.625501  323285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:59.625521  323285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:59.625543  323285 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:10:59.625570  323285 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:59.625670  323285 start.go:364] duration metric: took 83.48µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:10:59.625695  323285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:59.625758  323285 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 29 09:10:19 no-preload-043790 crio[559]: time="2025-10-29T09:10:19.413974647Z" level=info msg="Created container 3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h/kubernetes-dashboard" id=0d323533-48f2-4362-8064-7e24cbfbd3ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:19 no-preload-043790 crio[559]: time="2025-10-29T09:10:19.41473379Z" level=info msg="Starting container: 3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531" id=2e40b815-e362-471a-814e-8332f5406d87 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:19 no-preload-043790 crio[559]: time="2025-10-29T09:10:19.417287848Z" level=info msg="Started container" PID=1700 containerID=3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h/kubernetes-dashboard id=2e40b815-e362-471a-814e-8332f5406d87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7b7a6f20cb3378b3f60d07a9e801dadb2e73975d116a4dfbffcd4a1ad09cce6
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.106172149Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98188917-e07b-40f9-9106-6d2ac8edd18e name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.107086923Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=677047a2-795d-43d8-ba44-5e0e6cfb544a name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.108177862Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper" id=1c725c58-dd66-4f17-9877-14835aae14a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.108316146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.114110586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.114878748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.145448909Z" level=info msg="Created container 13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper" id=1c725c58-dd66-4f17-9877-14835aae14a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.146091289Z" level=info msg="Starting container: 13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d" id=70cff73c-366e-4e15-a3ea-6ae1aac1eb33 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.148410719Z" level=info msg="Started container" PID=1718 containerID=13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper id=70cff73c-366e-4e15-a3ea-6ae1aac1eb33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c032868ff8717e79b48e1515e3cc6e9317e0fdae4c2c883c7a456e6523d68500
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.273185933Z" level=info msg="Removing container: 06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc" id=897fb620-ea15-4d49-8ee2-7633d8318282 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:37 no-preload-043790 crio[559]: time="2025-10-29T09:10:37.28380213Z" level=info msg="Removed container 06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj/dashboard-metrics-scraper" id=897fb620-ea15-4d49-8ee2-7633d8318282 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.278138457Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eba38c12-5dbd-4209-b6e8-9076bdc713e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.27912314Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9be6d328-fbc1-49a6-83ed-ca1f2128e5eb name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.280221569Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6d9942b5-874b-4def-a53d-1d4f1ea02829 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.28035466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.284894305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.285115523Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c49e3ef40a2b0e491e14cbd558cf2e9620f979954422d73cf2b786fcf749dbdb/merged/etc/passwd: no such file or directory"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.285151955Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c49e3ef40a2b0e491e14cbd558cf2e9620f979954422d73cf2b786fcf749dbdb/merged/etc/group: no such file or directory"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.285449758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.311094316Z" level=info msg="Created container 7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a: kube-system/storage-provisioner/storage-provisioner" id=6d9942b5-874b-4def-a53d-1d4f1ea02829 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.311797255Z" level=info msg="Starting container: 7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a" id=9e1db4e4-fafb-4bd6-84d6-6a13b184631d name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:10:38 no-preload-043790 crio[559]: time="2025-10-29T09:10:38.314132933Z" level=info msg="Started container" PID=1732 containerID=7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a description=kube-system/storage-provisioner/storage-provisioner id=9e1db4e4-fafb-4bd6-84d6-6a13b184631d name=/runtime.v1.RuntimeService/StartContainer sandboxID=70f1259bd34ca143e79384eef8ab3599d833ace5408aab817ad69308ee6360b9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7da3212bbc11f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   70f1259bd34ca       storage-provisioner                          kube-system
	13ade4866a502       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   c032868ff8717       dashboard-metrics-scraper-6ffb444bf9-4pbhj   kubernetes-dashboard
	3f51be0b0b47d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   e7b7a6f20cb33       kubernetes-dashboard-855c9754f9-ntb8h        kubernetes-dashboard
	a0c546a847f92       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   16bd601894d17       busybox                                      default
	b489e2ba9e90c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   14526e5cbe68b       coredns-66bc5c9577-bgslp                     kube-system
	fa5efc086742b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   70f1259bd34ca       storage-provisioner                          kube-system
	fb2c325d870e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   3529fb976d14f       kindnet-dlrgv                                kube-system
	bc35edff0806a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   84c1adbc0b66e       kube-proxy-7dc8p                             kube-system
	90320debdab79       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   8cc3d048b9d21       kube-controller-manager-no-preload-043790    kube-system
	19319884348b2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   e718ad027d830       etcd-no-preload-043790                       kube-system
	aef6cdacaff62       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   47c2df1b3a3a6       kube-scheduler-no-preload-043790             kube-system
	a8e01bc837509       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   5fce132cf53bc       kube-apiserver-no-preload-043790             kube-system
	
	
	==> coredns [b489e2ba9e90c267cfa27ab1c914b7469058dfffa21f018ab0528e3073136a8d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49275 - 303 "HINFO IN 6472767915351042671.8541757429697825518. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.508916624s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-043790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-043790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=no-preload-043790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-043790
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:10:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:10:36 +0000   Wed, 29 Oct 2025 09:09:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-043790
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                75fc084d-43fe-4f22-be75-228a0a9d261e
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-bgslp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-no-preload-043790                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-dlrgv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-043790              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-043790     200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-7dc8p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-043790              100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4pbhj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ntb8h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node no-preload-043790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node no-preload-043790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node no-preload-043790 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node no-preload-043790 event: Registered Node no-preload-043790 in Controller
	  Normal  NodeReady                99s                kubelet          Node no-preload-043790 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node no-preload-043790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node no-preload-043790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node no-preload-043790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node no-preload-043790 event: Registered Node no-preload-043790 in Controller
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [19319884348b2e0458cd97dc51733e41464962a2500ec16c77771a98ba4e8b27] <==
	{"level":"warn","ts":"2025-10-29T09:10:05.332788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.339205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.349941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.359246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.366957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.373678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.380724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.387922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.402235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.409938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.424131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.433367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.443900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.457216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.467851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.484656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.486515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.495111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.503107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.521120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.527890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.537497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:05.640096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45622","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:10:18.360201Z","caller":"traceutil/trace.go:172","msg":"trace[1618832144] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"116.021954ms","start":"2025-10-29T09:10:18.244157Z","end":"2025-10-29T09:10:18.360179Z","steps":["trace[1618832144] 'process raft request'  (duration: 115.857406ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:10:18.761179Z","caller":"traceutil/trace.go:172","msg":"trace[619151242] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"164.309476ms","start":"2025-10-29T09:10:18.596850Z","end":"2025-10-29T09:10:18.761159Z","steps":["trace[619151242] 'process raft request'  (duration: 108.368938ms)","trace[619151242] 'compare'  (duration: 55.838336ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:11:05 up 53 min,  0 user,  load average: 5.90, 4.38, 2.72
	Linux no-preload-043790 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fb2c325d870e27637e135d08851105d6ba6af229cdf2312e1ed2631ee749d98a] <==
	I1029 09:10:07.588817       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:07.589070       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1029 09:10:07.589219       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:07.589240       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:07.589265       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:07.880548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:07.880623       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:07.880649       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:07.880840       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:08.185319       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:08.185460       1 metrics.go:72] Registering metrics
	I1029 09:10:08.185548       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:17.798372       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:17.798434       1 main.go:301] handling current node
	I1029 09:10:27.805203       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:27.805239       1 main.go:301] handling current node
	I1029 09:10:37.799330       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:37.799365       1 main.go:301] handling current node
	I1029 09:10:47.801405       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:47.801453       1 main.go:301] handling current node
	I1029 09:10:57.802633       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1029 09:10:57.802690       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a8e01bc837509e1a7e1a5c19a35ea64e574acd55a0d06c30f68441a4dc29ff7c] <==
	I1029 09:10:06.250876       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:10:06.252636       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:10:06.253222       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:10:06.258331       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:10:06.258427       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1029 09:10:06.258504       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:10:06.258592       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:10:06.263628       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:10:06.263650       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:10:06.263674       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:06.263683       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:10:06.270488       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:10:06.288591       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:10:06.294812       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:10:06.563760       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:10:06.603027       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:10:06.635500       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:06.647676       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:06.656921       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:10:06.713953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.221.240"}
	I1029 09:10:06.747152       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.241.69"}
	I1029 09:10:07.153279       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:09.677093       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:10:09.827870       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:10:10.230279       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [90320debdab793d8acd7009f9643c60d73f0cb96a8b824f6fde5cdeab7a2d1c0] <==
	I1029 09:10:09.581379       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:10:09.581522       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:10:09.584271       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:10:09.584862       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:10:09.605105       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:10:09.607893       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:10:09.623799       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:10:09.623820       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:10:09.623848       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:10:09.623848       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:10:09.623912       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:10:09.623908       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:10:09.623942       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:10:09.624040       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:10:09.624241       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:10:09.624270       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:10:09.629342       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:09.635542       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1029 09:10:09.635615       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1029 09:10:09.635649       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1029 09:10:09.635659       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1029 09:10:09.635666       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1029 09:10:09.651830       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:10:10.233397       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1029 09:10:10.234106       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [bc35edff0806a2c02a9e819af40f13f2fcb050f39073754abfad9eb0cccd877a] <==
	I1029 09:10:07.484198       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:10:07.551781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:10:07.652903       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:10:07.652965       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1029 09:10:07.653141       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:10:07.678628       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:07.678686       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:10:07.685152       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:10:07.685747       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:10:07.685802       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:07.687862       1 config.go:309] "Starting node config controller"
	I1029 09:10:07.687888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:10:07.687899       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:10:07.687937       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:10:07.687946       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:10:07.687964       1 config.go:200] "Starting service config controller"
	I1029 09:10:07.687970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:10:07.688026       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:10:07.688033       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:10:07.788117       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:10:07.788128       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:10:07.788118       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [aef6cdacaff629417a19cf93c9fdd05bdebca3a660634d42e64d3d9b50f6be3b] <==
	I1029 09:10:04.340169       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:10:06.247291       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:10:06.247319       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:06.254368       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:10:06.254462       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 09:10:06.254547       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1029 09:10:06.254556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:10:06.254583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:06.254631       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:06.254653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:06.254661       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:06.355497       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:10:06.355578       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 09:10:06.355689       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:10:10 no-preload-043790 kubelet[701]: I1029 09:10:10.321501     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-266gg\" (UniqueName: \"kubernetes.io/projected/899c5d21-61f7-485e-9ee7-21097c5687fe-kube-api-access-266gg\") pod \"kubernetes-dashboard-855c9754f9-ntb8h\" (UID: \"899c5d21-61f7-485e-9ee7-21097c5687fe\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h"
	Oct 29 09:10:10 no-preload-043790 kubelet[701]: I1029 09:10:10.321530     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/899c5d21-61f7-485e-9ee7-21097c5687fe-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ntb8h\" (UID: \"899c5d21-61f7-485e-9ee7-21097c5687fe\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h"
	Oct 29 09:10:13 no-preload-043790 kubelet[701]: I1029 09:10:13.668589     701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 29 09:10:14 no-preload-043790 kubelet[701]: I1029 09:10:14.191056     701 scope.go:117] "RemoveContainer" containerID="379b3099e1b007ea9f5ec9b96157ff10ba01733376f1d9c7d8ac0a8219712af6"
	Oct 29 09:10:15 no-preload-043790 kubelet[701]: I1029 09:10:15.205027     701 scope.go:117] "RemoveContainer" containerID="379b3099e1b007ea9f5ec9b96157ff10ba01733376f1d9c7d8ac0a8219712af6"
	Oct 29 09:10:15 no-preload-043790 kubelet[701]: I1029 09:10:15.205839     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:15 no-preload-043790 kubelet[701]: E1029 09:10:15.206084     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:16 no-preload-043790 kubelet[701]: I1029 09:10:16.215311     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:16 no-preload-043790 kubelet[701]: E1029 09:10:16.215601     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:20 no-preload-043790 kubelet[701]: I1029 09:10:20.343980     701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntb8h" podStartSLOduration=1.5601317479999999 podStartE2EDuration="10.343962029s" podCreationTimestamp="2025-10-29 09:10:10 +0000 UTC" firstStartedPulling="2025-10-29 09:10:10.543317098 +0000 UTC m=+7.554700530" lastFinishedPulling="2025-10-29 09:10:19.327147361 +0000 UTC m=+16.338530811" observedRunningTime="2025-10-29 09:10:20.242387274 +0000 UTC m=+17.253770705" watchObservedRunningTime="2025-10-29 09:10:20.343962029 +0000 UTC m=+17.355345480"
	Oct 29 09:10:23 no-preload-043790 kubelet[701]: I1029 09:10:23.955377     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:23 no-preload-043790 kubelet[701]: E1029 09:10:23.955602     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: I1029 09:10:37.105672     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: I1029 09:10:37.271687     701 scope.go:117] "RemoveContainer" containerID="06742da1e8539d79ee0a2276f40dc700a9d214090f8eaac91d7d819589e3eefc"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: I1029 09:10:37.271914     701 scope.go:117] "RemoveContainer" containerID="13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	Oct 29 09:10:37 no-preload-043790 kubelet[701]: E1029 09:10:37.272177     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:38 no-preload-043790 kubelet[701]: I1029 09:10:38.277751     701 scope.go:117] "RemoveContainer" containerID="fa5efc086742b530fe5144b57cb128e0ddfb74ce8e5e2c6464551027e860b71e"
	Oct 29 09:10:43 no-preload-043790 kubelet[701]: I1029 09:10:43.955463     701 scope.go:117] "RemoveContainer" containerID="13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	Oct 29 09:10:43 no-preload-043790 kubelet[701]: E1029 09:10:43.955723     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:57 no-preload-043790 kubelet[701]: I1029 09:10:57.107088     701 scope.go:117] "RemoveContainer" containerID="13ade4866a502df7776458724863b98ddc6e6380140bc43a0e27fc169685353d"
	Oct 29 09:10:57 no-preload-043790 kubelet[701]: E1029 09:10:57.107252     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4pbhj_kubernetes-dashboard(036208ee-21da-4f6e-885d-3842b10ddff7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4pbhj" podUID="036208ee-21da-4f6e-885d-3842b10ddff7"
	Oct 29 09:10:58 no-preload-043790 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:10:58 no-preload-043790 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:10:58 no-preload-043790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:10:58 no-preload-043790 systemd[1]: kubelet.service: Consumed 1.849s CPU time.
	
	
	==> kubernetes-dashboard [3f51be0b0b47d1c0f31d34438f6201f85142e47f56441e53f6dfcd7ce23b9531] <==
	2025/10/29 09:10:19 Starting overwatch
	2025/10/29 09:10:19 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:19 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:19 Using secret token for csrf signing
	2025/10/29 09:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:19 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:10:19 Generating JWE encryption key
	2025/10/29 09:10:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:19 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:19 Creating in-cluster Sidecar client
	2025/10/29 09:10:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:19 Serving insecurely on HTTP port: 9090
	2025/10/29 09:10:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7da3212bbc11f7a7d14e8d97fcf2179d4aedbe928c922232c448461cf06ed14a] <==
	I1029 09:10:38.327092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:10:38.335751       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:10:38.335796       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:10:38.338177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:41.794269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:46.054694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:49.654343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:52.708269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:55.730637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:55.738027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:55.738212       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:10:55.738410       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-043790_8c7ba760-98db-4f8e-998e-e56b9b5aed97!
	I1029 09:10:55.738411       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f909cb5-1c5e-4bfe-af9b-4b8cebee1396", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-043790_8c7ba760-98db-4f8e-998e-e56b9b5aed97 became leader
	W1029 09:10:55.740535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:55.743647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:10:55.839278       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-043790_8c7ba760-98db-4f8e-998e-e56b9b5aed97!
	W1029 09:10:57.747276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:57.752050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:59.756828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:10:59.765644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:01.768804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:01.773626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:03.777116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:03.871865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fa5efc086742b530fe5144b57cb128e0ddfb74ce8e5e2c6464551027e860b71e] <==
	I1029 09:10:07.463291       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:10:37.468520       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043790 -n no-preload-043790
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043790 -n no-preload-043790: exit status 2 (374.313381ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-043790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.37661ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-259430
helpers_test.go:243: (dbg) docker inspect newest-cni-259430:

-- stdout --
	[
	    {
	        "Id": "898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb",
	        "Created": "2025-10-29T09:11:05.338331033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 326258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:11:05.381020558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/hostname",
	        "HostsPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/hosts",
	        "LogPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb-json.log",
	        "Name": "/newest-cni-259430",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-259430:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-259430",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb",
	                "LowerDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-259430",
	                "Source": "/var/lib/docker/volumes/newest-cni-259430/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-259430",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-259430",
	                "name.minikube.sigs.k8s.io": "newest-cni-259430",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3b982c5e28270bfb3ae68b21da9fb12fdcde953421a3c19dfb118ba77f8c9374",
	            "SandboxKey": "/var/run/docker/netns/3b982c5e2827",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-259430": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:50:d0:c9:1f:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "52c784c79ded45742c986e67d511ad367789016db5dda3c8e7a6f446705f967c",
	                    "EndpointID": "f1986466669b8a629f66c13edd7da7d97b0268a98f0e7830d929f1c48040c285",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-259430",
	                        "898af032bdf9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
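Note: any single field in the inspect JSON above can be read with a Go template instead of dumping the whole document; the harness uses the same template later in this log to discover the SSH port. A minimal sketch against this container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-259430
	# expected output: 33128 (see the NetworkSettings.Ports block above)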
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-259430 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-043790 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
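Note: audit entries with an empty END TIME are commands that never completed successfully; the last row (addons enable metrics-server on newest-cni-259430) is the command under test in this post-mortem. To rerun it by hand against the same profile:

	out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain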
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:59.394267  323285 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:59.394622  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394635  323285 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:59.394640  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394949  323285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:59.395669  323285 out.go:368] Setting JSON to false
	I1029 09:10:59.397426  323285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3207,"bootTime":1761725852,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:59.397490  323285 start.go:143] virtualization: kvm guest
	I1029 09:10:59.399709  323285 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:59.401275  323285 notify.go:221] Checking for updates...
	I1029 09:10:59.401303  323285 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:59.402811  323285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:59.404227  323285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:59.405575  323285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:59.406888  323285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:59.408222  323285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:59.410015  323285 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410148  323285 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410263  323285 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410378  323285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:59.435730  323285 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:59.435827  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.503060  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.489541208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.503182  323285 docker.go:319] overlay module found
	I1029 09:10:59.505269  323285 out.go:179] * Using the docker driver based on user configuration
	I1029 09:10:59.506723  323285 start.go:309] selected driver: docker
	I1029 09:10:59.506741  323285 start.go:930] validating driver "docker" against <nil>
	I1029 09:10:59.506755  323285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:59.507436  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.587780  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.571693978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.588075  323285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1029 09:10:59.588122  323285 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1029 09:10:59.588720  323285 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:10:59.590863  323285 out.go:179] * Using Docker driver with root privileges
	I1029 09:10:59.592506  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:10:59.592592  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:59.592606  323285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:10:59.592730  323285 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:59.594390  323285 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:10:59.595763  323285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:59.597231  323285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:59.598574  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:59.598631  323285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:59.598649  323285 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:59.598672  323285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:59.598768  323285 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:59.598779  323285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:10:59.598919  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:10:59.598949  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json: {Name:mked5dfa4485c424df381c0f3cdc9d7d7ae817f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:59.625501  323285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:59.625521  323285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:59.625543  323285 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:10:59.625570  323285 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:59.625670  323285 start.go:364] duration metric: took 83.48µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:10:59.625695  323285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:59.625758  323285 start.go:125] createHost starting for "" (driver="docker")
	W1029 09:11:00.144468  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:02.642293  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:10:59.627620  323285 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:10:59.627853  323285 start.go:159] libmachine.API.Create for "newest-cni-259430" (driver="docker")
	I1029 09:10:59.627883  323285 client.go:173] LocalClient.Create starting
	I1029 09:10:59.627960  323285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 09:10:59.628018  323285 main.go:143] libmachine: Decoding PEM data...
	I1029 09:10:59.628045  323285 main.go:143] libmachine: Parsing certificate...
	I1029 09:10:59.628095  323285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 09:10:59.628122  323285 main.go:143] libmachine: Decoding PEM data...
	I1029 09:10:59.628138  323285 main.go:143] libmachine: Parsing certificate...
	I1029 09:10:59.628554  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:10:59.648491  323285 cli_runner.go:211] docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:10:59.648580  323285 network_create.go:284] running [docker network inspect newest-cni-259430] to gather additional debugging logs...
	I1029 09:10:59.648603  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430
	W1029 09:10:59.670427  323285 cli_runner.go:211] docker network inspect newest-cni-259430 returned with exit code 1
	I1029 09:10:59.670462  323285 network_create.go:287] error running [docker network inspect newest-cni-259430]: docker network inspect newest-cni-259430: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-259430 not found
	I1029 09:10:59.670476  323285 network_create.go:289] output of [docker network inspect newest-cni-259430]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-259430 not found
	
	** /stderr **
	I1029 09:10:59.670560  323285 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:10:59.691834  323285 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b27c046ec42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:bc:cb:4a:50:f2} reservation:<nil>}
	I1029 09:10:59.692456  323285 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0c15025939eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:79:05:d8:32:73} reservation:<nil>}
	I1029 09:10:59.693254  323285 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5e92a9c19423 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:ec:bb:72:ab:23} reservation:<nil>}
	I1029 09:10:59.693813  323285 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86d19029abe0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:37:1e:54:39:51} reservation:<nil>}
	I1029 09:10:59.694835  323285 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10110}
	I1029 09:10:59.694867  323285 network_create.go:124] attempt to create docker network newest-cni-259430 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1029 09:10:59.694938  323285 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-259430 newest-cni-259430
	I1029 09:10:59.769631  323285 network_create.go:108] docker network newest-cni-259430 192.168.85.0/24 created
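Note: the freshly created network can be verified with the same inspect template the harness ran above; a minimal sketch:

	docker network inspect newest-cni-259430 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected output: 192.168.85.0/24 192.168.85.1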
	I1029 09:10:59.769672  323285 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-259430" container
	I1029 09:10:59.769753  323285 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:10:59.791054  323285 cli_runner.go:164] Run: docker volume create newest-cni-259430 --label name.minikube.sigs.k8s.io=newest-cni-259430 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:10:59.815466  323285 oci.go:103] Successfully created a docker volume newest-cni-259430
	I1029 09:10:59.815571  323285 cli_runner.go:164] Run: docker run --rm --name newest-cni-259430-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-259430 --entrypoint /usr/bin/test -v newest-cni-259430:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:11:00.296980  323285 oci.go:107] Successfully prepared a docker volume newest-cni-259430
	I1029 09:11:00.297051  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:00.297213  323285 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:11:00.297322  323285 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-259430:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1029 09:11:04.712172  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:07.141484  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:09.142117  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:11:05.253096  323285 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-259430:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.955700802s)
	I1029 09:11:05.253129  323285 kic.go:203] duration metric: took 4.955930157s to extract preloaded images to volume ...
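Note: the preload tarball is unpacked straight into the named volume that becomes the node's /var, so the container boots with the image store already staged. A hedged check, reusing the log's own --entrypoint override pattern (assumes /usr/bin/ls exists in the kicbase image):

	docker run --rm --entrypoint /usr/bin/ls -v newest-cni-259430:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 /var/lib
	# /var/lib should now list the extracted content, e.g. a containers/ directory for crio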
	W1029 09:11:05.253214  323285 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1029 09:11:05.253260  323285 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1029 09:11:05.253319  323285 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:11:05.315847  323285 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-259430 --name newest-cni-259430 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-259430 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-259430 --network newest-cni-259430 --ip 192.168.85.2 --volume newest-cni-259430:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:11:05.869187  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Running}}
	I1029 09:11:05.893258  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:05.916213  323285 cli_runner.go:164] Run: docker exec newest-cni-259430 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:11:05.978806  323285 oci.go:144] the created container "newest-cni-259430" has a running status.
	I1029 09:11:05.978874  323285 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa...
	I1029 09:11:06.219653  323285 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:11:06.545636  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:06.569771  323285 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:11:06.569799  323285 kic_runner.go:114] Args: [docker exec --privileged newest-cni-259430 chown docker:docker /home/docker/.ssh/authorized_keys]
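Note: at this point the node is reachable over SSH with the generated key; a minimal sketch, assuming the host port mapping shown earlier (33128) and the docker user the harness itself connects as:

	ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa -p 33128 docker@127.0.0.1 hostname
	# expected output: newest-cni-259430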
	I1029 09:11:06.628943  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:06.652327  323285 machine.go:94] provisionDockerMachine start ...
	I1029 09:11:06.652444  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:06.681514  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:06.681819  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:06.681843  323285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:11:06.838511  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:06.838546  323285 ubuntu.go:182] provisioning hostname "newest-cni-259430"
	I1029 09:11:06.838634  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:06.859040  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:06.859350  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:06.859374  323285 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-259430 && echo "newest-cni-259430" | sudo tee /etc/hostname
	I1029 09:11:07.013620  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:07.013721  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.037196  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:07.037409  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:07.037428  323285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-259430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-259430/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-259430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:11:07.183951  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:11:07.184022  323285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:11:07.184048  323285 ubuntu.go:190] setting up certificates
	I1029 09:11:07.184060  323285 provision.go:84] configureAuth start
	I1029 09:11:07.184115  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:07.202495  323285 provision.go:143] copyHostCerts
	I1029 09:11:07.202577  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:11:07.202592  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:11:07.202673  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:11:07.202793  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:11:07.202805  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:11:07.202849  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:11:07.202933  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:11:07.202943  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:11:07.202984  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:11:07.203078  323285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.newest-cni-259430 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-259430]
	I1029 09:11:07.395413  323285 provision.go:177] copyRemoteCerts
	I1029 09:11:07.395479  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:11:07.395531  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.414871  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:07.517040  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:11:07.538399  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:11:07.557923  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:11:07.577096  323285 provision.go:87] duration metric: took 393.019887ms to configureAuth
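Note: the server certificate generated above should carry the SANs from the san=[...] list (127.0.0.1 192.168.85.2 localhost minikube newest-cni-259430); a hedged way to confirm this on the copied cert:

	openssl x509 -in /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'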
	I1029 09:11:07.577128  323285 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:11:07.577309  323285 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:07.577427  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.597565  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:07.597783  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:07.597799  323285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:11:07.865697  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:11:07.865723  323285 machine.go:97] duration metric: took 1.213371631s to provisionDockerMachine
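Note: provisioning just wrote a CRI-O drop-in inside the node container and restarted the service; the file can be inspected from the host:

	docker exec newest-cni-259430 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '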
	I1029 09:11:07.865734  323285 client.go:176] duration metric: took 8.237846029s to LocalClient.Create
	I1029 09:11:07.865755  323285 start.go:167] duration metric: took 8.237903765s to libmachine.API.Create "newest-cni-259430"
	I1029 09:11:07.865764  323285 start.go:293] postStartSetup for "newest-cni-259430" (driver="docker")
	I1029 09:11:07.865778  323285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:11:07.865871  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:11:07.865931  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.885321  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:07.991029  323285 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:11:07.994753  323285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:11:07.994789  323285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:11:07.994799  323285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:11:07.994848  323285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:11:07.994930  323285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:11:07.995049  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:11:08.003392  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:08.025461  323285 start.go:296] duration metric: took 159.680734ms for postStartSetup
	I1029 09:11:08.025834  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:08.047276  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:11:08.047502  323285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:11:08.047547  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.066779  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.172233  323285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:11:08.177185  323285 start.go:128] duration metric: took 8.551412166s to createHost
	I1029 09:11:08.177213  323285 start.go:83] releasing machines lock for "newest-cni-259430", held for 8.551530554s
	I1029 09:11:08.177283  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:08.197450  323285 ssh_runner.go:195] Run: cat /version.json
	I1029 09:11:08.197522  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.197562  323285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:11:08.197635  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.217275  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.217726  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.376049  323285 ssh_runner.go:195] Run: systemctl --version
	I1029 09:11:08.383134  323285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:11:08.422212  323285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:11:08.427525  323285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:11:08.427605  323285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:11:08.464435  323285 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1029 09:11:08.464463  323285 start.go:496] detecting cgroup driver to use...
	I1029 09:11:08.464495  323285 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:11:08.464546  323285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:11:08.481110  323285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:11:08.494209  323285 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:11:08.494260  323285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:11:08.511612  323285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:11:08.530553  323285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:11:08.621566  323285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:11:08.726166  323285 docker.go:234] disabling docker service ...
	I1029 09:11:08.726224  323285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:11:08.746348  323285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:11:08.760338  323285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:11:08.858295  323285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:11:08.943579  323285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:11:08.957200  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:11:08.972520  323285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:11:08.972577  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:08.983843  323285 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:11:08.983921  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:08.993498  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.003269  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.014275  323285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:11:09.023507  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.033766  323285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.051114  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
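The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, re-add conmon_cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A sketch of the same idempotent line rewrites in Go (the helper and its flow are illustrative; the paths and values are the ones logged):

	package main

	import (
		"log"
		"os"
		"regexp"
		"strings"
	)

	// rewriteLine replaces any line matching pat with repl, mirroring the
	// `sed -i 's|^.*key = .*$|key = value|'` edits in the log above.
	func rewriteLine(conf, pat, repl string) string {
		return regexp.MustCompile(`(?m)`+pat).ReplaceAllString(conf, repl)
	}

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		conf := string(data)
		conf = rewriteLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = rewriteLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`)
		if !strings.Contains(conf, "conmon_cgroup") {
			conf = strings.Replace(conf, "cgroup_manager = \"systemd\"",
				"cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"", 1)
		}
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			log.Fatal(err)
		}
	}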
	I1029 09:11:09.061145  323285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:11:09.069157  323285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:11:09.078220  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:09.173060  323285 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:11:09.290124  323285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:11:09.290180  323285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:11:09.294386  323285 start.go:564] Will wait 60s for crictl version
	I1029 09:11:09.294446  323285 ssh_runner.go:195] Run: which crictl
	I1029 09:11:09.298964  323285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:11:09.328014  323285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
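After restarting CRI-O, the tool waits up to 60s for /var/run/crio/crio.sock to accept connections and then confirms the runtime with crictl version, as logged above. A reduced sketch of that readiness gate (the polling loop is illustrative; socket path, timeout, and the crictl call are taken from the log):

	package main

	import (
		"fmt"
		"log"
		"net"
		"os/exec"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock" // socket path from the log
		deadline := time.Now().Add(60 * time.Second)
		for {
			conn, err := net.Dial("unix", sock)
			if err == nil {
				conn.Close()
				break
			}
			if time.Now().After(deadline) {
				log.Fatalf("timed out waiting for %s: %v", sock, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl version: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}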
	I1029 09:11:09.328085  323285 ssh_runner.go:195] Run: crio --version
	I1029 09:11:09.356771  323285 ssh_runner.go:195] Run: crio --version
	I1029 09:11:09.388520  323285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:11:09.389795  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:11:09.408274  323285 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:11:09.412583  323285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:11:09.424803  323285 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:11:09.426052  323285 kubeadm.go:884] updating cluster {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:11:09.426218  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:09.426300  323285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:09.460542  323285 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:09.460563  323285 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:11:09.460614  323285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:09.487044  323285 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:09.487068  323285 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:11:09.487079  323285 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:11:09.487186  323285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-259430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:11:09.487269  323285 ssh_runner.go:195] Run: crio config
	I1029 09:11:09.534905  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:11:09.534931  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:09.534948  323285 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:11:09.534974  323285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-259430 NodeName:newest-cni-259430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:11:09.535132  323285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-259430"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:11:09.535193  323285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:11:09.543772  323285 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:11:09.543833  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:11:09.552123  323285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:11:09.565265  323285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:11:09.581711  323285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
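The kubeadm YAML above is generated from the option set logged at kubeadm.go:190 and shipped to the node as kubeadm.yaml.new. A reduced sketch of rendering such a config with text/template; the Params struct and the trimmed template are illustrative stand-ins, not minikube's actual types (those live under pkg/minikube/bootstrapper):

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Params stands in for the kubeadm option set in the log.
	type Params struct {
		NodeName         string
		AdvertiseAddress string
		BindPort         int
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := Params{
			NodeName:         "newest-cni-259430",
			AdvertiseAddress: "192.168.85.2",
			BindPort:         8443,
			PodSubnet:        "10.42.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.34.1",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			log.Fatal(err)
		}
	}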
	I1029 09:11:09.595396  323285 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:11:09.599644  323285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
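The grep -v / echo pipeline above (also used earlier for host.minikube.internal) is an idempotent /etc/hosts update: drop any stale record for the name, then append a fresh one. The same logic as a small Go helper, assuming the process can write the file (a sketch, not minikube's code):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// ensureHost removes any line ending in "\t<name>" and appends a fresh
	// "<ip>\t<name>" record -- the same effect as the bash pipeline above.
	func ensureHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}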
	I1029 09:11:09.610487  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:09.692291  323285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:09.726091  323285 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430 for IP: 192.168.85.2
	I1029 09:11:09.726118  323285 certs.go:195] generating shared ca certs ...
	I1029 09:11:09.726141  323285 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.726315  323285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:11:09.726395  323285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:11:09.726414  323285 certs.go:257] generating profile certs ...
	I1029 09:11:09.726496  323285 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key
	I1029 09:11:09.726515  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt with IP's: []
	I1029 09:11:09.952951  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt ...
	I1029 09:11:09.952982  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt: {Name:mk4c95155e122c467607b07172eef79936ce7dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.953175  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key ...
	I1029 09:11:09.953188  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key: {Name:mk823250b94fe9a0154aa07226f6d7d2d7183a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.953268  323285 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3
	I1029 09:11:09.953284  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1029 09:11:10.526658  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 ...
	I1029 09:11:10.526687  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3: {Name:mk38b00ad6c7cfbe495c3451bae68542fb6d0084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.526859  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3 ...
	I1029 09:11:10.526874  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3: {Name:mk4e442214473ed9f59e8f778fdf753552f389cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.526956  323285 certs.go:382] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt
	I1029 09:11:10.527047  323285 certs.go:386] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key
	I1029 09:11:10.527110  323285 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key
	I1029 09:11:10.527127  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt with IP's: []
	I1029 09:11:10.693534  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt ...
	I1029 09:11:10.693566  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt: {Name:mk99151503057a9b4735d9a33bf9f994dbe8bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.693747  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key ...
	I1029 09:11:10.693761  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key: {Name:mkb37888acb09fb2cfa4458e6f93e0fa1bd40cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
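The certs.go/crypto.go steps above mint profile certificates signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs logged at 09:11:09.953284. A self-contained sketch of that kind of CA-signed cert with crypto/x509; the throwaway in-process CA and lifetimes are illustrative (the real CA key is loaded from the .minikube dir):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA (illustrative only).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the log
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}
		// Serving cert with the IP SANs from the log.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}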
	I1029 09:11:10.693934  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:11:10.693972  323285 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:11:10.693982  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:11:10.694016  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:11:10.694037  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:11:10.694058  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:11:10.694104  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:10.694741  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:11:10.714478  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:11:10.733894  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:11:10.752731  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:11:10.771424  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:11:10.790531  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:11:10.809745  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:11:10.829770  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:11:10.848820  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:11:10.869632  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:11:10.888449  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:11:10.906606  323285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:11:10.920157  323285 ssh_runner.go:195] Run: openssl version
	I1029 09:11:10.926421  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:11:10.935727  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.940055  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.940117  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.975298  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:11:10.984671  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:11:10.994016  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:11:10.998049  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:11:10.998109  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:11:11.032768  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:11:11.042076  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:11:11.051249  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.055496  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.055557  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.090597  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
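Each openssl x509 -hash / ln -fs pair above installs a CA into the OpenSSL trust directory, which looks certificates up by <subject-hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0 in the log). A sketch of that pair in Go, shelling out to openssl for the hash rather than reimplementing it:

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash reproduces the `openssl x509 -hash` + `ln -fs` steps above.
	func linkByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // replace an existing link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}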
	I1029 09:11:11.099729  323285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:11:11.103802  323285 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:11:11.103864  323285 kubeadm.go:401] StartCluster: {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:11.103946  323285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:11:11.104033  323285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:11:11.131290  323285 cri.go:89] found id: ""
	I1029 09:11:11.131346  323285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:11:11.140423  323285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:11:11.148741  323285 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:11:11.148798  323285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:11:11.156810  323285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:11:11.156826  323285 kubeadm.go:158] found existing configuration files:
	
	I1029 09:11:11.156874  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:11:11.164570  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:11:11.164623  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:11:11.172197  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:11:11.180475  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:11:11.180538  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:11:11.188729  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:11:11.197081  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:11:11.197134  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:11:11.205164  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:11:11.213757  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:11:11.213834  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 09:11:11.222560  323285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:11:11.268456  323285 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 09:11:11.268507  323285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:11:11.290199  323285 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:11:11.290297  323285 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1029 09:11:11.290361  323285 kubeadm.go:319] OS: Linux
	I1029 09:11:11.290441  323285 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:11:11.290490  323285 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:11:11.290536  323285 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:11:11.290625  323285 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:11:11.290702  323285 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:11:11.290774  323285 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:11:11.290840  323285 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:11:11.290910  323285 kubeadm.go:319] CGROUPS_IO: enabled
	I1029 09:11:11.353151  323285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:11:11.353280  323285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:11:11.353455  323285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 09:11:11.361607  323285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1029 09:11:11.641711  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:14.140814  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:11:11.363970  323285 out.go:252]   - Generating certificates and keys ...
	I1029 09:11:11.364100  323285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:11:11.364205  323285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:11:11.568728  323285 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:11:11.698854  323285 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:11:12.039747  323285 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:11:12.129625  323285 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:11:12.340599  323285 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:11:12.340797  323285 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-259430] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:11:12.447881  323285 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:11:12.448051  323285 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-259430] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:11:12.809139  323285 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:11:13.118618  323285 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:11:13.421858  323285 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:11:13.421937  323285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:11:13.838287  323285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:11:13.908409  323285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:11:13.966840  323285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:11:14.294658  323285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:11:14.520651  323285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:11:14.521473  323285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:11:14.525440  323285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:11:16.641154  317625 pod_ready.go:94] pod "coredns-66bc5c9577-qtsxl" is "Ready"
	I1029 09:11:16.641182  317625 pod_ready.go:86] duration metric: took 32.006267628s for pod "coredns-66bc5c9577-qtsxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.644109  317625 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.649400  317625 pod_ready.go:94] pod "etcd-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.649427  317625 pod_ready.go:86] duration metric: took 5.291908ms for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.651669  317625 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.657129  317625 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.657156  317625 pod_ready.go:86] duration metric: took 5.462345ms for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.659534  317625 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.839252  317625 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.839288  317625 pod_ready.go:86] duration metric: took 179.72875ms for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.038505  317625 pod_ready.go:83] waiting for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.439109  317625 pod_ready.go:94] pod "kube-proxy-82xcl" is "Ready"
	I1029 09:11:17.439143  317625 pod_ready.go:86] duration metric: took 400.60463ms for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.638686  317625 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:18.038057  317625 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:18.038087  317625 pod_ready.go:86] duration metric: took 399.368296ms for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:18.038104  317625 pod_ready.go:40] duration metric: took 33.407465789s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:11:18.083317  317625 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:18.085224  317625 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-017274" cluster and "default" namespace by default
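The pid-317625 lines above belong to the concurrently running default-k8s-diff-port-017274 test, which polls each kube-system pod matching the logged label set until it reports Ready. A stripped-down sketch of one such check with client-go; the kubeconfig path, timeout, and single-label selector are illustrative, not the test's actual parameters:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				log.Fatal(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				ready = ready && podReady(&pods.Items[i])
			}
			if ready {
				fmt.Println("all matching pods Ready")
				return
			}
			select {
			case <-ctx.Done():
				log.Fatal("timed out waiting for pods")
			case <-time.After(2 * time.Second):
			}
		}
	}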
	I1029 09:11:14.527215  323285 out.go:252]   - Booting up control plane ...
	I1029 09:11:14.527330  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:11:14.528001  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:11:14.529019  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:11:14.543280  323285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:11:14.543401  323285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:11:14.550630  323285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:11:14.550841  323285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:11:14.550884  323285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:11:14.650739  323285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:11:14.650905  323285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:11:15.652524  323285 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001987708s
	I1029 09:11:15.655579  323285 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:11:15.655710  323285 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1029 09:11:15.655837  323285 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:11:15.655956  323285 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:11:16.826389  323285 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.170616379s
	I1029 09:11:17.564867  323285 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.909192382s
	I1029 09:11:19.659250  323285 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.003481114s
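kubeadm's control-plane-check above probes the three endpoints until each returns HTTP 200. A minimal sketch of equivalent polling; the URLs are the ones logged, while the insecure TLS transport and the 4m budget are illustrative shortcuts, not kubeadm's actual client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Endpoints checked by control-plane-check in the log above.
		urls := []string{
			"https://192.168.85.2:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		}
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip verification of the components' self-signed certs
			// (illustrative shortcut for a local probe).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for _, u := range urls {
			deadline := time.Now().Add(4 * time.Minute)
			for {
				resp, err := client.Get(u)
				if err == nil && resp.StatusCode == http.StatusOK {
					resp.Body.Close()
					fmt.Println(u, "healthy")
					break
				}
				if resp != nil {
					resp.Body.Close()
				}
				if time.Now().After(deadline) {
					log.Fatalf("%s never became healthy", u)
				}
				time.Sleep(time.Second)
			}
		}
	}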
	I1029 09:11:19.671798  323285 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:11:19.684260  323285 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:11:19.699471  323285 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:11:19.699763  323285 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-259430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:11:19.710393  323285 kubeadm.go:319] [bootstrap-token] Using token: etunao.909gsmlonyfps6an
	I1029 09:11:19.712233  323285 out.go:252]   - Configuring RBAC rules ...
	I1029 09:11:19.712362  323285 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:11:19.717094  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:11:19.726179  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:11:19.730162  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:11:19.733946  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:11:19.737821  323285 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:11:20.066141  323285 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:11:20.488826  323285 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:11:21.065711  323285 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:11:21.066649  323285 kubeadm.go:319] 
	I1029 09:11:21.066715  323285 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:11:21.066724  323285 kubeadm.go:319] 
	I1029 09:11:21.066789  323285 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:11:21.066796  323285 kubeadm.go:319] 
	I1029 09:11:21.066849  323285 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:11:21.066954  323285 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:11:21.067064  323285 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:11:21.067084  323285 kubeadm.go:319] 
	I1029 09:11:21.067165  323285 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:11:21.067184  323285 kubeadm.go:319] 
	I1029 09:11:21.067246  323285 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:11:21.067257  323285 kubeadm.go:319] 
	I1029 09:11:21.067324  323285 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:11:21.067491  323285 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:11:21.067595  323285 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:11:21.067606  323285 kubeadm.go:319] 
	I1029 09:11:21.067731  323285 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:11:21.067854  323285 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:11:21.067869  323285 kubeadm.go:319] 
	I1029 09:11:21.068015  323285 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token etunao.909gsmlonyfps6an \
	I1029 09:11:21.068175  323285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 \
	I1029 09:11:21.068206  323285 kubeadm.go:319] 	--control-plane 
	I1029 09:11:21.068228  323285 kubeadm.go:319] 
	I1029 09:11:21.068341  323285 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:11:21.068355  323285 kubeadm.go:319] 
	I1029 09:11:21.068471  323285 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token etunao.909gsmlonyfps6an \
	I1029 09:11:21.068560  323285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 
	I1029 09:11:21.072046  323285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1029 09:11:21.072153  323285 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:11:21.072178  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:11:21.072201  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:21.075063  323285 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:11:21.076333  323285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:11:21.080941  323285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:11:21.080968  323285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:11:21.097427  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:11:21.329871  323285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:11:21.329963  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:21.330018  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-259430 minikube.k8s.io/updated_at=2025_10_29T09_11_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=newest-cni-259430 minikube.k8s.io/primary=true
	I1029 09:11:21.340318  323285 ops.go:34] apiserver oom_adj: -16
	I1029 09:11:21.427541  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:21.927714  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:22.428593  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:22.928549  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:23.427854  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:23.927662  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:24.427970  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:24.927865  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:25.427901  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:25.501188  323285 kubeadm.go:1114] duration metric: took 4.171293414s to wait for elevateKubeSystemPrivileges
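The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait: the minikube-rbac clusterrolebinding targets kube-system:default, so the tool retries at roughly 500ms until that service account exists. The same retry loop as a sketch (timeout is illustrative; the binary path and command are the ones logged):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl" // path from the log
		deadline := time.Now().Add(2 * time.Minute)
		for {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account exists; safe to bind RBAC")
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("default service account never appeared: %v", err)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
	}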
	I1029 09:11:25.501228  323285 kubeadm.go:403] duration metric: took 14.397367402s to StartCluster
	I1029 09:11:25.501250  323285 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:25.501330  323285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:25.502295  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:25.502553  323285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:11:25.502565  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:11:25.502588  323285 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:11:25.502688  323285 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-259430"
	I1029 09:11:25.502719  323285 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-259430"
	I1029 09:11:25.502730  323285 addons.go:70] Setting default-storageclass=true in profile "newest-cni-259430"
	I1029 09:11:25.502755  323285 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:25.502770  323285 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:25.502782  323285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-259430"
	I1029 09:11:25.503251  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.503349  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.504494  323285 out.go:179] * Verifying Kubernetes components...
	I1029 09:11:25.505968  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:25.527485  323285 addons.go:239] Setting addon default-storageclass=true in "newest-cni-259430"
	I1029 09:11:25.527539  323285 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:25.527972  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.530438  323285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:11:25.531693  323285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:25.531717  323285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:11:25.531789  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:25.560504  323285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:25.560528  323285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:11:25.560591  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:25.562044  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:25.586080  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:25.599802  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:11:25.657575  323285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:25.699467  323285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:25.704238  323285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:25.813033  323285 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1029 09:11:25.813984  323285 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:11:25.814061  323285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:11:26.026073  323285 api_server.go:72] duration metric: took 523.482951ms to wait for apiserver process to appear ...
	I1029 09:11:26.026104  323285 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:11:26.026125  323285 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:26.031704  323285 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:11:26.032574  323285 api_server.go:141] control plane version: v1.34.1
	I1029 09:11:26.032596  323285 api_server.go:131] duration metric: took 6.485257ms to wait for apiserver health ...
	I1029 09:11:26.032604  323285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:11:26.034592  323285 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 09:11:26.035774  323285 addons.go:515] duration metric: took 533.185042ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:11:26.036425  323285 system_pods.go:59] 8 kube-system pods found
	I1029 09:11:26.036459  323285 system_pods.go:61] "coredns-66bc5c9577-k74f5" [d32eecf7-613f-43fe-87b6-1c56dc6f7837] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:26.036469  323285 system_pods.go:61] "etcd-newest-cni-259430" [21bef91b-1e23-4c0b-836a-7d38dbcd158d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:11:26.036477  323285 system_pods.go:61] "kindnet-4555c" [e9503ed8-3583-471b-8ed2-cb19fa55932f] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:11:26.036483  323285 system_pods.go:61] "kube-apiserver-newest-cni-259430" [e2aa2d83-bd57-4b42-9f74-cc369442fb48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:11:26.036489  323285 system_pods.go:61] "kube-controller-manager-newest-cni-259430" [c8b1f927-8450-4b3d-8380-0d74388f7b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:11:26.036493  323285 system_pods.go:61] "kube-proxy-md8mn" [5b216c8f-e72c-44bd-ac4a-4f07213f90bb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:11:26.036499  323285 system_pods.go:61] "kube-scheduler-newest-cni-259430" [6dffb3f4-a5a2-456f-bfe4-34c2a0916645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:11:26.036510  323285 system_pods.go:61] "storage-provisioner" [b614d976-a2b2-4dff-9276-58ac33de3f70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:26.036517  323285 system_pods.go:74] duration metric: took 3.906841ms to wait for pod list to return data ...
	I1029 09:11:26.036528  323285 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:11:26.038941  323285 default_sa.go:45] found service account: "default"
	I1029 09:11:26.038970  323285 default_sa.go:55] duration metric: took 2.434992ms for default service account to be created ...
	I1029 09:11:26.038985  323285 kubeadm.go:587] duration metric: took 536.401056ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:11:26.039017  323285 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:11:26.041827  323285 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:11:26.041856  323285 node_conditions.go:123] node cpu capacity is 8
	I1029 09:11:26.041871  323285 node_conditions.go:105] duration metric: took 2.848114ms to run NodePressure ...
	I1029 09:11:26.041886  323285 start.go:242] waiting for startup goroutines ...
	I1029 09:11:26.317778  323285 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-259430" context rescaled to 1 replicas
	I1029 09:11:26.317823  323285 start.go:247] waiting for cluster config update ...
	I1029 09:11:26.317834  323285 start.go:256] writing updated cluster config ...
	I1029 09:11:26.318152  323285 ssh_runner.go:195] Run: rm -f paused
	I1029 09:11:26.372618  323285 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:26.375584  323285 out.go:179] * Done! kubectl is now configured to use "newest-cni-259430" cluster and "default" namespace by default
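	
	The sed pipeline logged at 09:11:25 patches the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1). Reconstructed from that sed expression alone (the surrounding directives are assumed to be the stock kubeadm Corefile), the edited fragment would look like:
	
	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }
	
	The hosts block answers queries for the injected name and falls through to the remaining plugins for everything else, which is what the "host record injected into CoreDNS's ConfigMap" line then confirms.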
	
	
	==> CRI-O <==
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.96109101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.963950384Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=94ab5e2d-10c8-4536-86d6-9e915f23e806 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.964565434Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2da9721f-2096-4911-bf9b-8d7c6727a251 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.965833701Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.966560684Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.966647389Z" level=info msg="Ran pod sandbox ceed384f408a617e6c6e8cc022c917a91fbb0f19266d6ea875c0239cc3d90b27 with infra container: kube-system/kube-proxy-md8mn/POD" id=94ab5e2d-10c8-4536-86d6-9e915f23e806 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.967445531Z" level=info msg="Ran pod sandbox 7db68c0f1af6b5e55b98315700d3221a4d8fef4cb8153aed1f922f2329d20bcf with infra container: kube-system/kindnet-4555c/POD" id=2da9721f-2096-4911-bf9b-8d7c6727a251 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.96812517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dd048027-c71b-41b3-9631-feaca665a7c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.968505823Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9af916ed-e654-4001-a9bc-0880bb339c20 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.969166458Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=954621dd-02c3-43c6-b493-32c68db118b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.969548092Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ec5479d0-0090-4e41-954e-1d0179d82bf5 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.973546743Z" level=info msg="Creating container: kube-system/kube-proxy-md8mn/kube-proxy" id=00ea25f7-3364-47d7-ac40-517719dd3992 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.97368069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.978916506Z" level=info msg="Creating container: kube-system/kindnet-4555c/kindnet-cni" id=8d68a4dd-17ba-4d60-ba63-2ee2db07b9db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.979053065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.979068977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.980115484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.984756755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:25 newest-cni-259430 crio[775]: time="2025-10-29T09:11:25.985344213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:26 newest-cni-259430 crio[775]: time="2025-10-29T09:11:26.012923382Z" level=info msg="Created container f74f7ddb1af8a5f34b55af80f8683937f6de48f3a5e6cff52d09e67c242201e6: kube-system/kindnet-4555c/kindnet-cni" id=8d68a4dd-17ba-4d60-ba63-2ee2db07b9db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:26 newest-cni-259430 crio[775]: time="2025-10-29T09:11:26.013778801Z" level=info msg="Starting container: f74f7ddb1af8a5f34b55af80f8683937f6de48f3a5e6cff52d09e67c242201e6" id=7e711b02-3268-49f1-a539-922e3613bc1b name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:26 newest-cni-259430 crio[775]: time="2025-10-29T09:11:26.016036101Z" level=info msg="Created container 20df6bce6606db825acfac2be59c80317fb9a8451bd0a92fd2eb040b17ec6b59: kube-system/kube-proxy-md8mn/kube-proxy" id=00ea25f7-3364-47d7-ac40-517719dd3992 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:26 newest-cni-259430 crio[775]: time="2025-10-29T09:11:26.016220406Z" level=info msg="Started container" PID=1599 containerID=f74f7ddb1af8a5f34b55af80f8683937f6de48f3a5e6cff52d09e67c242201e6 description=kube-system/kindnet-4555c/kindnet-cni id=7e711b02-3268-49f1-a539-922e3613bc1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=7db68c0f1af6b5e55b98315700d3221a4d8fef4cb8153aed1f922f2329d20bcf
	Oct 29 09:11:26 newest-cni-259430 crio[775]: time="2025-10-29T09:11:26.016707121Z" level=info msg="Starting container: 20df6bce6606db825acfac2be59c80317fb9a8451bd0a92fd2eb040b17ec6b59" id=751c8032-27fa-4ea3-a526-78782e885c20 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:26 newest-cni-259430 crio[775]: time="2025-10-29T09:11:26.020053491Z" level=info msg="Started container" PID=1598 containerID=20df6bce6606db825acfac2be59c80317fb9a8451bd0a92fd2eb040b17ec6b59 description=kube-system/kube-proxy-md8mn/kube-proxy id=751c8032-27fa-4ea3-a526-78782e885c20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ceed384f408a617e6c6e8cc022c917a91fbb0f19266d6ea875c0239cc3d90b27
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f74f7ddb1af8a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   7db68c0f1af6b       kindnet-4555c                               kube-system
	20df6bce6606d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   ceed384f408a6       kube-proxy-md8mn                            kube-system
	63f1578a38b7d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   5280d1a848927       etcd-newest-cni-259430                      kube-system
	243adada9da2b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   d787516e7f09a       kube-controller-manager-newest-cni-259430   kube-system
	2f11bf8f3f7f6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   956dbbf5b3317       kube-scheduler-newest-cni-259430            kube-system
	1a50c3fd3df1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   1f7c0336b1e1e       kube-apiserver-newest-cni-259430            kube-system
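	
	The table above is the CRI-level container listing for the node. A hedged way to reproduce it against this profile (the commands are standard; the profile name is taken from the log) is:
	
	    out/minikube-linux-amd64 ssh -p newest-cni-259430 -- sudo crictl ps -a
	
	crictl queries CRI-O directly, so it lists containers even while the apiserver is still coming up and kubectl cannot.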
	
	
	==> describe nodes <==
	Name:               newest-cni-259430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-259430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=newest-cni-259430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:11:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-259430
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:11:20 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:11:20 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:11:20 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Oct 2025 09:11:20 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-259430
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b0b59dc6-8cfb-44ff-8492-2c787c88523a
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-259430                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-4555c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-259430             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-259430    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-md8mn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-259430             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-259430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-259430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-259430 status is now: NodeHasSufficientPID
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-259430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-259430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node newest-cni-259430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-259430 event: Registered Node newest-cni-259430 in Controller
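	
	The Ready=False condition above cites the usual cause on a freshly started node: no CNI configuration file in /etc/cni/net.d/ yet. Once the kindnet-cni container started in the CRI-O log writes its config there, the kubelet flips the node to Ready. Purely as an illustrative sketch (not the exact file kindnet writes), a minimal CNI conflist for the PodCIDR shown above has this shape:
	
	    {
	      "cniVersion": "0.3.1",
	      "name": "kindnet",
	      "plugins": [
	        { "type": "ptp",
	          "ipam": { "type": "host-local",
	                    "ranges": [[{ "subnet": "10.42.0.0/24" }]] } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }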
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [63f1578a38b7d9ba054b60ff0fda7c78101101e822ed282116b618d1246fa20c] <==
	{"level":"warn","ts":"2025-10-29T09:11:16.874356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.883136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.890985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.898570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.905146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.912222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.918872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.925813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.933835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.943116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.950499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.957508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.964548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.972128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.979355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.986587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:16.994500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.002296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.010727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.017519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.023972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.049456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.056559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.064405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:17.114196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:27 up 53 min,  0 user,  load average: 4.76, 4.22, 2.70
	Linux newest-cni-259430 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f74f7ddb1af8a5f34b55af80f8683937f6de48f3a5e6cff52d09e67c242201e6] <==
	I1029 09:11:26.193827       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:11:26.286284       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:11:26.286474       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:11:26.286494       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:11:26.286529       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:11:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:11:26.490465       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:11:26.490493       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:11:26.490510       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:11:26.490656       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:11:26.890690       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:11:26.890720       1 metrics.go:72] Registering metrics
	I1029 09:11:26.890782       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [1a50c3fd3df1b90fa47b075fc1e4da75317ecdf8de9a3ec2734dcee217ea0a9b] <==
	I1029 09:11:17.594553       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 09:11:17.596261       1 controller.go:667] quota admission added evaluator for: namespaces
	E1029 09:11:17.597105       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1029 09:11:17.598258       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1029 09:11:17.598316       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:11:17.602640       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:11:17.602766       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:11:17.800294       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:11:18.500390       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1029 09:11:18.504564       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1029 09:11:18.504584       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:11:19.074280       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:11:19.123567       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:11:19.204403       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1029 09:11:19.212232       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1029 09:11:19.213643       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:11:19.219789       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:11:19.522753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:11:20.474197       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:11:20.487634       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1029 09:11:20.496521       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:11:25.325877       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:11:25.331914       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:11:25.525815       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:11:25.625881       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [243adada9da2b762cf3f1b66ba8bf6a93a2fa96bc354f7a72e3078866c57d088] <==
	I1029 09:11:24.487774       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:11:24.487791       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:11:24.488199       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-259430" podCIDRs=["10.42.0.0/24"]
	I1029 09:11:24.495381       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:11:24.502690       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:11:24.510236       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:11:24.519606       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1029 09:11:24.520847       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:11:24.522035       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:11:24.522058       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:11:24.522087       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:11:24.522087       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 09:11:24.522126       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:11:24.522139       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:11:24.522179       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:11:24.522246       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:11:24.522283       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:11:24.522293       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:11:24.522342       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:11:24.522041       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:11:24.522463       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1029 09:11:24.523920       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:11:24.527239       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:11:24.528361       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:11:24.542105       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [20df6bce6606db825acfac2be59c80317fb9a8451bd0a92fd2eb040b17ec6b59] <==
	I1029 09:11:26.059767       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:11:26.135315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:11:26.235492       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:11:26.235528       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:11:26.235597       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:11:26.255317       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:11:26.255377       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:11:26.260890       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:11:26.261358       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:11:26.261389       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:11:26.262934       1 config.go:200] "Starting service config controller"
	I1029 09:11:26.262962       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:11:26.263094       1 config.go:309] "Starting node config controller"
	I1029 09:11:26.263109       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:11:26.263117       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:11:26.263168       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:11:26.263188       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:11:26.263207       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:11:26.263213       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:11:26.363217       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:11:26.363278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:11:26.363323       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2f11bf8f3f7f65e5cdfb664cfc59282aa52349cfd96a9f13007fe4941f6ef4d1] <==
	E1029 09:11:17.564298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 09:11:17.564478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 09:11:17.564483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:11:17.564584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:11:17.564641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 09:11:17.564679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 09:11:17.564725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:11:17.564751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 09:11:17.564846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 09:11:17.564856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:11:17.564866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:11:17.565067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:11:17.565068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:11:17.565159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 09:11:17.565229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 09:11:18.419493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 09:11:18.472384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 09:11:18.482811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 09:11:18.520306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 09:11:18.739045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 09:11:18.739759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 09:11:18.842789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 09:11:18.850086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 09:11:18.862815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1029 09:11:21.958432       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:11:20 newest-cni-259430 kubelet[1314]: I1029 09:11:20.608852    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5329731eaa0247f79ed9e9499c0a7c8-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-259430\" (UID: \"c5329731eaa0247f79ed9e9499c0a7c8\") " pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.296176    1314 apiserver.go:52] "Watching apiserver"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.306736    1314 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.339639    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.339721    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.339855    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-259430"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: E1029 09:11:21.347245    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-259430\" already exists" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: E1029 09:11:21.348553    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-259430\" already exists" pod="kube-system/kube-apiserver-newest-cni-259430"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: E1029 09:11:21.348869    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-259430\" already exists" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.374442    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-259430" podStartSLOduration=1.374403241 podStartE2EDuration="1.374403241s" podCreationTimestamp="2025-10-29 09:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:11:21.363785487 +0000 UTC m=+1.133831072" watchObservedRunningTime="2025-10-29 09:11:21.374403241 +0000 UTC m=+1.144448828"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.387064    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-259430" podStartSLOduration=1.387027376 podStartE2EDuration="1.387027376s" podCreationTimestamp="2025-10-29 09:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:11:21.374681523 +0000 UTC m=+1.144727098" watchObservedRunningTime="2025-10-29 09:11:21.387027376 +0000 UTC m=+1.157072941"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.387286    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-259430" podStartSLOduration=1.387274831 podStartE2EDuration="1.387274831s" podCreationTimestamp="2025-10-29 09:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:11:21.387172686 +0000 UTC m=+1.157218332" watchObservedRunningTime="2025-10-29 09:11:21.387274831 +0000 UTC m=+1.157320415"
	Oct 29 09:11:21 newest-cni-259430 kubelet[1314]: I1029 09:11:21.413758    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-259430" podStartSLOduration=1.413734899 podStartE2EDuration="1.413734899s" podCreationTimestamp="2025-10-29 09:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:11:21.401372384 +0000 UTC m=+1.171417971" watchObservedRunningTime="2025-10-29 09:11:21.413734899 +0000 UTC m=+1.183780538"
	Oct 29 09:11:24 newest-cni-259430 kubelet[1314]: I1029 09:11:24.551071    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 29 09:11:24 newest-cni-259430 kubelet[1314]: I1029 09:11:24.551816    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.743361    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-kube-proxy\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.744159    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-xtables-lock\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.744306    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc9vx\" (UniqueName: \"kubernetes.io/projected/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-kube-api-access-gc9vx\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.744527    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-xtables-lock\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.744582    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-lib-modules\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.744606    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtln\" (UniqueName: \"kubernetes.io/projected/e9503ed8-3583-471b-8ed2-cb19fa55932f-kube-api-access-mxtln\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.744665    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-lib-modules\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:25 newest-cni-259430 kubelet[1314]: I1029 09:11:25.744730    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-cni-cfg\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:26 newest-cni-259430 kubelet[1314]: I1029 09:11:26.366295    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4555c" podStartSLOduration=1.366271148 podStartE2EDuration="1.366271148s" podCreationTimestamp="2025-10-29 09:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:11:26.366068909 +0000 UTC m=+6.136114493" watchObservedRunningTime="2025-10-29 09:11:26.366271148 +0000 UTC m=+6.136316732"
	Oct 29 09:11:26 newest-cni-259430 kubelet[1314]: I1029 09:11:26.412217    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-md8mn" podStartSLOduration=1.412189121 podStartE2EDuration="1.412189121s" podCreationTimestamp="2025-10-29 09:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 09:11:26.393115714 +0000 UTC m=+6.163161297" watchObservedRunningTime="2025-10-29 09:11:26.412189121 +0000 UTC m=+6.182234716"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-259430 -n newest-cni-259430
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-259430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-k74f5 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner: exit status 1 (59.098418ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-k74f5" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.13s)
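A note on the NotFound errors above: the post-mortem lists non-running pods first (coredns-66bc5c9577-k74f5, storage-provisioner) and only then describes them, and both names had disappeared between the list and the describe, so the second step produced "not found" noise instead of useful detail. The Go sketch below is illustrative only, not minikube's helpers_test.go code: it reproduces that two-step list-then-describe flow with os/exec and treats NotFound on the second step as tolerated noise. The function name describeNonRunning is hypothetical; it assumes kubectl is on PATH.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// describeNonRunning is a hypothetical name; this is not minikube's
	// helpers_test.go code, just the same two-step flow seen in the log.
	func describeNonRunning(kubeContext string) error {
		// Step 1: list pods not in phase Running (same field selector as the log).
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return fmt.Errorf("list non-running pods: %w", err)
		}
		pods := strings.Fields(string(out))
		if len(pods) == 0 {
			return nil
		}
		// Step 2: describe them. A pod can vanish between the two steps
		// (as coredns-66bc5c9577-k74f5 did here), so NotFound is treated
		// as noise rather than a post-mortem failure.
		args := append([]string{"--context", kubeContext, "describe", "pod"}, pods...)
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			fmt.Printf("describe (NotFound tolerated): %v\n%s\n", err, out)
		}
		return nil
	}
	
	func main() {
		if err := describeNonRunning("newest-cni-259430"); err != nil {
			fmt.Println(err)
		}
	}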

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-017274 --alsologtostderr -v=1
E1029 09:11:30.456945    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-017274 --alsologtostderr -v=1: exit status 80 (2.12927311s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-017274 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:11:29.839806  330377 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:11:29.840175  330377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:29.840191  330377 out.go:374] Setting ErrFile to fd 2...
	I1029 09:11:29.840197  330377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:29.840478  330377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:11:29.840776  330377 out.go:368] Setting JSON to false
	I1029 09:11:29.840816  330377 mustload.go:66] Loading cluster: default-k8s-diff-port-017274
	I1029 09:11:29.841272  330377 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:29.841746  330377 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-017274 --format={{.State.Status}}
	I1029 09:11:29.861357  330377 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	I1029 09:11:29.861713  330377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:11:29.925446  330377 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-29 09:11:29.914480874 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:11:29.926334  330377 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-017274 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:11:29.928516  330377 out.go:179] * Pausing node default-k8s-diff-port-017274 ... 
	I1029 09:11:29.931098  330377 host.go:66] Checking if "default-k8s-diff-port-017274" exists ...
	I1029 09:11:29.931401  330377 ssh_runner.go:195] Run: systemctl --version
	I1029 09:11:29.931456  330377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-017274
	I1029 09:11:29.951435  330377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/default-k8s-diff-port-017274/id_rsa Username:docker}
	I1029 09:11:30.052484  330377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:30.066275  330377 pause.go:52] kubelet running: true
	I1029 09:11:30.066361  330377 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:30.253431  330377 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:30.253555  330377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:30.322239  330377 cri.go:89] found id: "9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546"
	I1029 09:11:30.322271  330377 cri.go:89] found id: "b307f64c120f3819158edb444be7e97b99be83763b6b415d2244b39fe00046f9"
	I1029 09:11:30.322281  330377 cri.go:89] found id: "622a00b140b2c0609cf8cf6561c828e18ba9000776cbc4c975473747329412e8"
	I1029 09:11:30.322287  330377 cri.go:89] found id: "6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4"
	I1029 09:11:30.322290  330377 cri.go:89] found id: "4ce19f536b3e79a00539eda45389baa388b6e72af2f5f3735054624a5e24cc23"
	I1029 09:11:30.322293  330377 cri.go:89] found id: "7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416"
	I1029 09:11:30.322295  330377 cri.go:89] found id: "f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e"
	I1029 09:11:30.322298  330377 cri.go:89] found id: "16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6"
	I1029 09:11:30.322300  330377 cri.go:89] found id: "bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21"
	I1029 09:11:30.322307  330377 cri.go:89] found id: "f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3"
	I1029 09:11:30.322309  330377 cri.go:89] found id: "6a233da0986cdd4b355e6ad5ef2ef59ef1cc2366e325cd151f90f0e07579e1d5"
	I1029 09:11:30.322311  330377 cri.go:89] found id: ""
	I1029 09:11:30.322349  330377 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:30.334265  330377 retry.go:31] will retry after 303.090723ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:30Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:11:30.637808  330377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:30.651712  330377 pause.go:52] kubelet running: false
	I1029 09:11:30.651783  330377 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:30.801307  330377 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:30.801380  330377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:30.869677  330377 cri.go:89] found id: "9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546"
	I1029 09:11:30.869705  330377 cri.go:89] found id: "b307f64c120f3819158edb444be7e97b99be83763b6b415d2244b39fe00046f9"
	I1029 09:11:30.869715  330377 cri.go:89] found id: "622a00b140b2c0609cf8cf6561c828e18ba9000776cbc4c975473747329412e8"
	I1029 09:11:30.869720  330377 cri.go:89] found id: "6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4"
	I1029 09:11:30.869725  330377 cri.go:89] found id: "4ce19f536b3e79a00539eda45389baa388b6e72af2f5f3735054624a5e24cc23"
	I1029 09:11:30.869732  330377 cri.go:89] found id: "7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416"
	I1029 09:11:30.869736  330377 cri.go:89] found id: "f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e"
	I1029 09:11:30.869741  330377 cri.go:89] found id: "16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6"
	I1029 09:11:30.869745  330377 cri.go:89] found id: "bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21"
	I1029 09:11:30.869770  330377 cri.go:89] found id: "f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3"
	I1029 09:11:30.869773  330377 cri.go:89] found id: "6a233da0986cdd4b355e6ad5ef2ef59ef1cc2366e325cd151f90f0e07579e1d5"
	I1029 09:11:30.869775  330377 cri.go:89] found id: ""
	I1029 09:11:30.869831  330377 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:30.881877  330377 retry.go:31] will retry after 193.072753ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:30Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:11:31.075320  330377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:31.089184  330377 pause.go:52] kubelet running: false
	I1029 09:11:31.089250  330377 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:31.235197  330377 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:31.235308  330377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:31.306779  330377 cri.go:89] found id: "9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546"
	I1029 09:11:31.306807  330377 cri.go:89] found id: "b307f64c120f3819158edb444be7e97b99be83763b6b415d2244b39fe00046f9"
	I1029 09:11:31.306814  330377 cri.go:89] found id: "622a00b140b2c0609cf8cf6561c828e18ba9000776cbc4c975473747329412e8"
	I1029 09:11:31.306819  330377 cri.go:89] found id: "6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4"
	I1029 09:11:31.306822  330377 cri.go:89] found id: "4ce19f536b3e79a00539eda45389baa388b6e72af2f5f3735054624a5e24cc23"
	I1029 09:11:31.306825  330377 cri.go:89] found id: "7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416"
	I1029 09:11:31.306828  330377 cri.go:89] found id: "f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e"
	I1029 09:11:31.306831  330377 cri.go:89] found id: "16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6"
	I1029 09:11:31.306833  330377 cri.go:89] found id: "bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21"
	I1029 09:11:31.306839  330377 cri.go:89] found id: "f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3"
	I1029 09:11:31.306842  330377 cri.go:89] found id: "6a233da0986cdd4b355e6ad5ef2ef59ef1cc2366e325cd151f90f0e07579e1d5"
	I1029 09:11:31.306844  330377 cri.go:89] found id: ""
	I1029 09:11:31.306896  330377 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:31.319402  330377 retry.go:31] will retry after 329.845382ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:31Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:11:31.650107  330377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:31.663561  330377 pause.go:52] kubelet running: false
	I1029 09:11:31.663625  330377 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:31.808063  330377 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:31.808147  330377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:31.877428  330377 cri.go:89] found id: "9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546"
	I1029 09:11:31.877448  330377 cri.go:89] found id: "b307f64c120f3819158edb444be7e97b99be83763b6b415d2244b39fe00046f9"
	I1029 09:11:31.877451  330377 cri.go:89] found id: "622a00b140b2c0609cf8cf6561c828e18ba9000776cbc4c975473747329412e8"
	I1029 09:11:31.877476  330377 cri.go:89] found id: "6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4"
	I1029 09:11:31.877479  330377 cri.go:89] found id: "4ce19f536b3e79a00539eda45389baa388b6e72af2f5f3735054624a5e24cc23"
	I1029 09:11:31.877482  330377 cri.go:89] found id: "7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416"
	I1029 09:11:31.877485  330377 cri.go:89] found id: "f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e"
	I1029 09:11:31.877487  330377 cri.go:89] found id: "16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6"
	I1029 09:11:31.877490  330377 cri.go:89] found id: "bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21"
	I1029 09:11:31.877498  330377 cri.go:89] found id: "f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3"
	I1029 09:11:31.877502  330377 cri.go:89] found id: "6a233da0986cdd4b355e6ad5ef2ef59ef1cc2366e325cd151f90f0e07579e1d5"
	I1029 09:11:31.877506  330377 cri.go:89] found id: ""
	I1029 09:11:31.877552  330377 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:31.892700  330377 out.go:203] 
	W1029 09:11:31.894128  330377 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:11:31.894148  330377 out.go:285] * 
	* 
	W1029 09:11:31.898185  330377 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:11:31.899648  330377 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-017274 --alsologtostderr -v=1 failed: exit status 80
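Reading the stderr above: the CRI-level query (crictl ps over the kube-system, kubernetes-dashboard, and istio-operator namespaces) finds the same eleven running containers on every pass, but each attempt to confirm them with `sudo runc list -f json` exits 1 because /run/runc does not exist, and after three retries (waits of 303ms, 193ms, 330ms) pause gives up with GUEST_PAUSE. A plausible explanation, though the log does not state it, is that this CRI-O node keeps its OCI runtime state somewhere other than runc's default root. The sketch below is illustrative, not minikube's pause.go: it wraps the same command in a small retry loop and exposes the state root through runc's global --root flag so that assumption is explicit.

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// listRunning is illustrative, not minikube's pause.go. The failing
	// command in the log is plain `sudo runc list -f json`, which looks in
	// runc's default state root /run/runc; the root parameter here uses
	// runc's global --root flag to make that assumption explicit.
	func listRunning(root string, attempts int) ([]byte, error) {
		var lastErr error
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "--root", root,
				"list", "-f", "json").Output()
			if err == nil {
				return out, nil
			}
			lastErr = err
			// The log's retry helper uses jittered waits (303ms, 193ms,
			// 330ms); a plain doubling backoff stands in for that here.
			fmt.Printf("will retry after %v: runc list: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return nil, fmt.Errorf("list running after %d attempts: %w", attempts, lastErr)
	}
	
	func main() {
		if _, err := listRunning("/run/runc", 3); err != nil {
			// On this node /run/runc is missing, so pause exits with GUEST_PAUSE.
			fmt.Println(err)
		}
	}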
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-017274
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-017274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb",
	        "Created": "2025-10-29T09:09:32.123718192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317827,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:10:34.5663352Z",
	            "FinishedAt": "2025-10-29T09:10:33.60337614Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/hosts",
	        "LogPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb-json.log",
	        "Name": "/default-k8s-diff-port-017274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-017274:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-017274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb",
	                "LowerDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-017274",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-017274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-017274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-017274",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-017274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0696f90b06524da7694148a9be743a1d593fe477b6d899eabcb52f64512155b3",
	            "SandboxKey": "/var/run/docker/netns/0696f90b0652",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-017274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:eb:f5:17:cb:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3eee94d37a532f968084ecba10a40919f575531a63b06a3b1433848fa7502a53",
	                    "EndpointID": "c411a7ebd1b61fa0c71632154653881debf500c761abd6a89c76f4d9207ce35e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-017274",
	                        "7cabc8999167"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
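The inspect output largely rules out the container itself: it is Running and unpaused, and SSH is published on 127.0.0.1:33123, the same port the pause command dialed above. For reference, the host port can be recovered with the same Go template the test harness uses (the cli_runner call at 09:11:29.931); the sketch below is a minimal standalone version of that call, assuming only that the docker CLI is on PATH.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// sshHostPort resolves the host port bound to a container's 22/tcp,
	// mirroring the `docker container inspect -f` template in the stderr
	// above. For this container it resolves to "33123" (see the Ports map
	// in the inspect output).
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		port, err := sshHostPort("default-k8s-diff-port-017274")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port)
	}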
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274: exit status 2 (328.101584ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-017274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-017274 logs -n 25: (1.142810751s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-259430 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ image   │ default-k8s-diff-port-017274 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ pause   │ -p default-k8s-diff-port-017274 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:59.394267  323285 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:59.394622  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394635  323285 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:59.394640  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394949  323285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:59.395669  323285 out.go:368] Setting JSON to false
	I1029 09:10:59.397426  323285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3207,"bootTime":1761725852,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:59.397490  323285 start.go:143] virtualization: kvm guest
	I1029 09:10:59.399709  323285 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:59.401275  323285 notify.go:221] Checking for updates...
	I1029 09:10:59.401303  323285 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:59.402811  323285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:59.404227  323285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:59.405575  323285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:59.406888  323285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:59.408222  323285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:59.410015  323285 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410148  323285 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410263  323285 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410378  323285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:59.435730  323285 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:59.435827  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.503060  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.489541208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.503182  323285 docker.go:319] overlay module found
	I1029 09:10:59.505269  323285 out.go:179] * Using the docker driver based on user configuration
	I1029 09:10:59.506723  323285 start.go:309] selected driver: docker
	I1029 09:10:59.506741  323285 start.go:930] validating driver "docker" against <nil>
	I1029 09:10:59.506755  323285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:59.507436  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.587780  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.571693978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.588075  323285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1029 09:10:59.588122  323285 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1029 09:10:59.588720  323285 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:10:59.590863  323285 out.go:179] * Using Docker driver with root privileges
	I1029 09:10:59.592506  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:10:59.592592  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:59.592606  323285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:10:59.592730  323285 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:59.594390  323285 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:10:59.595763  323285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:59.597231  323285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:59.598574  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:59.598631  323285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:59.598649  323285 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:59.598672  323285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:59.598768  323285 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:59.598779  323285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
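For context: the preload check above only verifies that the cached tarball exists on the host; the equivalent manual check is a plain stat of the same path (a sketch, path taken from the log):

	# the preloaded image tarball minikube reuses instead of pulling images one by one
	ls -lh /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4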
	I1029 09:10:59.598919  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:10:59.598949  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json: {Name:mked5dfa4485c424df381c0f3cdc9d7d7ae817f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
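The profile saved here is plain JSON, so the settings dumped in the cluster-config line above can be read back directly (a sketch; assumes jq is installed on the host):

	# spot-check that the saved profile matches the flags-derived config above
	jq '.KubernetesConfig | {KubernetesVersion, ClusterName, NetworkPlugin, ServiceCIDR}' \
	  /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json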
	I1029 09:10:59.625501  323285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:59.625521  323285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:59.625543  323285 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:10:59.625570  323285 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:59.625670  323285 start.go:364] duration metric: took 83.48µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:10:59.625695  323285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:59.625758  323285 start.go:125] createHost starting for "" (driver="docker")
	W1029 09:11:00.144468  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:02.642293  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:10:59.627620  323285 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:10:59.627853  323285 start.go:159] libmachine.API.Create for "newest-cni-259430" (driver="docker")
	I1029 09:10:59.627883  323285 client.go:173] LocalClient.Create starting
	I1029 09:10:59.627960  323285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 09:10:59.628018  323285 main.go:143] libmachine: Decoding PEM data...
	I1029 09:10:59.628045  323285 main.go:143] libmachine: Parsing certificate...
	I1029 09:10:59.628095  323285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 09:10:59.628122  323285 main.go:143] libmachine: Decoding PEM data...
	I1029 09:10:59.628138  323285 main.go:143] libmachine: Parsing certificate...
	I1029 09:10:59.628554  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:10:59.648491  323285 cli_runner.go:211] docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:10:59.648580  323285 network_create.go:284] running [docker network inspect newest-cni-259430] to gather additional debugging logs...
	I1029 09:10:59.648603  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430
	W1029 09:10:59.670427  323285 cli_runner.go:211] docker network inspect newest-cni-259430 returned with exit code 1
	I1029 09:10:59.670462  323285 network_create.go:287] error running [docker network inspect newest-cni-259430]: docker network inspect newest-cni-259430: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-259430 not found
	I1029 09:10:59.670476  323285 network_create.go:289] output of [docker network inspect newest-cni-259430]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-259430 not found
	
	** /stderr **
	I1029 09:10:59.670560  323285 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:10:59.691834  323285 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b27c046ec42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:bc:cb:4a:50:f2} reservation:<nil>}
	I1029 09:10:59.692456  323285 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0c15025939eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:79:05:d8:32:73} reservation:<nil>}
	I1029 09:10:59.693254  323285 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5e92a9c19423 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:ec:bb:72:ab:23} reservation:<nil>}
	I1029 09:10:59.693813  323285 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86d19029abe0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:37:1e:54:39:51} reservation:<nil>}
	I1029 09:10:59.694835  323285 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10110}
	I1029 09:10:59.694867  323285 network_create.go:124] attempt to create docker network newest-cni-259430 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1029 09:10:59.694938  323285 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-259430 newest-cni-259430
	I1029 09:10:59.769631  323285 network_create.go:108] docker network newest-cni-259430 192.168.85.0/24 created
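The subnet scan above walks 192.168.49.0/24, .58, .67 and .76 (all taken by earlier profiles) before settling on 192.168.85.0/24. Replayed by hand, the probe and the create step look roughly like this (a sketch; flags copied from the log):

	# list subnets already claimed by existing bridge networks
	docker network ls --format '{{.Name}}' | xargs -r docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# create the dedicated cluster network on the first free /24
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true newest-cni-259430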
	I1029 09:10:59.769672  323285 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-259430" container
	I1029 09:10:59.769753  323285 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:10:59.791054  323285 cli_runner.go:164] Run: docker volume create newest-cni-259430 --label name.minikube.sigs.k8s.io=newest-cni-259430 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:10:59.815466  323285 oci.go:103] Successfully created a docker volume newest-cni-259430
	I1029 09:10:59.815571  323285 cli_runner.go:164] Run: docker run --rm --name newest-cni-259430-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-259430 --entrypoint /usr/bin/test -v newest-cni-259430:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:11:00.296980  323285 oci.go:107] Successfully prepared a docker volume newest-cni-259430
	I1029 09:11:00.297051  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:00.297213  323285 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:11:00.297322  323285 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-259430:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1029 09:11:04.712172  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:07.141484  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:09.142117  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:11:05.253096  323285 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-259430:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.955700802s)
	I1029 09:11:05.253129  323285 kic.go:203] duration metric: took 4.955930157s to extract preloaded images to volume ...
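The ~5s spent here is the preload being unpacked into the cluster's /var volume through a throwaway tar container. Condensed to its essentials (a sketch; image digest shortened, paths from the log):

	# unpack the lz4 preload into the named volume the node container will mount at /var
	KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773'   # sha256 digest omitted here
	PRELOAD='/home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4'
	docker volume create newest-cni-259430
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro -v newest-cni-259430:/extractDir \
	  "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir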
	W1029 09:11:05.253214  323285 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1029 09:11:05.253260  323285 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1029 09:11:05.253319  323285 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:11:05.315847  323285 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-259430 --name newest-cni-259430 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-259430 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-259430 --network newest-cni-259430 --ip 192.168.85.2 --volume newest-cni-259430:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:11:05.869187  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Running}}
	I1029 09:11:05.893258  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:05.916213  323285 cli_runner.go:164] Run: docker exec newest-cni-259430 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:11:05.978806  323285 oci.go:144] the created container "newest-cni-259430" has a running status.
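The three commands right after the long `docker run` are plain health probes; the same checks by hand (a sketch, commands taken from the log):

	# confirm the node container is up and exec works before SSH provisioning starts
	docker container inspect newest-cni-259430 --format '{{.State.Running}}'
	docker container inspect newest-cni-259430 --format '{{.State.Status}}'
	docker exec newest-cni-259430 stat /var/lib/dpkg/alternatives/iptables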
	I1029 09:11:05.978874  323285 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa...
	I1029 09:11:06.219653  323285 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:11:06.545636  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:06.569771  323285 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:11:06.569799  323285 kic_runner.go:114] Args: [docker exec --privileged newest-cni-259430 chown docker:docker /home/docker/.ssh/authorized_keys]
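Key provisioning here is: generate an RSA keypair on the host, copy the public half into the container, fix ownership. A minimal replay (a sketch; assumes /home/docker/.ssh already exists inside the kicbase image, as it does in this run):

	# mirror of the kic_runner steps above
	ssh-keygen -t rsa -b 2048 -N '' -f ./id_rsa
	docker cp ./id_rsa.pub newest-cni-259430:/home/docker/.ssh/authorized_keys
	docker exec --privileged newest-cni-259430 chown docker:docker /home/docker/.ssh/authorized_keys
	# after which the published port accepts: ssh -i ./id_rsa -p 33128 docker@127.0.0.1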
	I1029 09:11:06.628943  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:06.652327  323285 machine.go:94] provisionDockerMachine start ...
	I1029 09:11:06.652444  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:06.681514  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:06.681819  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:06.681843  323285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:11:06.838511  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:06.838546  323285 ubuntu.go:182] provisioning hostname "newest-cni-259430"
	I1029 09:11:06.838634  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:06.859040  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:06.859350  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:06.859374  323285 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-259430 && echo "newest-cni-259430" | sudo tee /etc/hostname
	I1029 09:11:07.013620  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:07.013721  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.037196  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:07.037409  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:07.037428  323285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-259430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-259430/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-259430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:11:07.183951  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:11:07.184022  323285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:11:07.184048  323285 ubuntu.go:190] setting up certificates
	I1029 09:11:07.184060  323285 provision.go:84] configureAuth start
	I1029 09:11:07.184115  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:07.202495  323285 provision.go:143] copyHostCerts
	I1029 09:11:07.202577  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:11:07.202592  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:11:07.202673  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:11:07.202793  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:11:07.202805  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:11:07.202849  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:11:07.202933  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:11:07.202943  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:11:07.202984  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:11:07.203078  323285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.newest-cni-259430 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-259430]
	I1029 09:11:07.395413  323285 provision.go:177] copyRemoteCerts
	I1029 09:11:07.395479  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:11:07.395531  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.414871  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:07.517040  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:11:07.538399  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:11:07.557923  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:11:07.577096  323285 provision.go:87] duration metric: took 393.019887ms to configureAuth
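configureAuth signed a fresh server cert whose SANs are listed a few lines up (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-259430). To confirm what was actually baked in (a sketch):

	# inspect the SANs of the server cert that was just scp'd to /etc/docker/server.pem
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'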
	I1029 09:11:07.577128  323285 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:11:07.577309  323285 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:07.577427  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.597565  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:07.597783  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:07.597799  323285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:11:07.865697  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:11:07.865723  323285 machine.go:97] duration metric: took 1.213371631s to provisionDockerMachine
	I1029 09:11:07.865734  323285 client.go:176] duration metric: took 8.237846029s to LocalClient.Create
	I1029 09:11:07.865755  323285 start.go:167] duration metric: took 8.237903765s to libmachine.API.Create "newest-cni-259430"
	I1029 09:11:07.865764  323285 start.go:293] postStartSetup for "newest-cni-259430" (driver="docker")
	I1029 09:11:07.865778  323285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:11:07.865871  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:11:07.865931  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.885321  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:07.991029  323285 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:11:07.994753  323285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:11:07.994789  323285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:11:07.994799  323285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:11:07.994848  323285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:11:07.994930  323285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:11:07.995049  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:11:08.003392  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:08.025461  323285 start.go:296] duration metric: took 159.680734ms for postStartSetup
	I1029 09:11:08.025834  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:08.047276  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:11:08.047502  323285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:11:08.047547  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.066779  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.172233  323285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:11:08.177185  323285 start.go:128] duration metric: took 8.551412166s to createHost
	I1029 09:11:08.177213  323285 start.go:83] releasing machines lock for "newest-cni-259430", held for 8.551530554s
	I1029 09:11:08.177283  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:08.197450  323285 ssh_runner.go:195] Run: cat /version.json
	I1029 09:11:08.197522  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.197562  323285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:11:08.197635  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.217275  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.217726  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.376049  323285 ssh_runner.go:195] Run: systemctl --version
	I1029 09:11:08.383134  323285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:11:08.422212  323285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:11:08.427525  323285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:11:08.427605  323285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:11:08.464435  323285 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
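Disabling the stock bridge CNIs matters because kindnet is about to own pod networking; any conflist left in /etc/cni/net.d would be picked up first by CRI-O. A quick way to see the result of the rename above (a sketch):

	# configs renamed to *.mk_disabled no longer match the runtime's *.conflist glob
	sudo ls -la /etc/cni/net.d/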
	I1029 09:11:08.464463  323285 start.go:496] detecting cgroup driver to use...
	I1029 09:11:08.464495  323285 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:11:08.464546  323285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:11:08.481110  323285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:11:08.494209  323285 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:11:08.494260  323285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:11:08.511612  323285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:11:08.530553  323285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:11:08.621566  323285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:11:08.726166  323285 docker.go:234] disabling docker service ...
	I1029 09:11:08.726224  323285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:11:08.746348  323285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:11:08.760338  323285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:11:08.858295  323285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:11:08.943579  323285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:11:08.957200  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:11:08.972520  323285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:11:08.972577  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:08.983843  323285 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:11:08.983921  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:08.993498  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.003269  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.014275  323285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:11:09.023507  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.033766  323285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.051114  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.061145  323285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:11:09.069157  323285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:11:09.078220  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:09.173060  323285 ssh_runner.go:195] Run: sudo systemctl restart crio
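All the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. Condensed to the two that matter most for this run (a sketch; expressions copied from the log):

	# pin the pause image and align CRI-O's cgroup driver with the kubelet's
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio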
	I1029 09:11:09.290124  323285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:11:09.290180  323285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:11:09.294386  323285 start.go:564] Will wait 60s for crictl version
	I1029 09:11:09.294446  323285 ssh_runner.go:195] Run: which crictl
	I1029 09:11:09.298964  323285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:11:09.328014  323285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:11:09.328085  323285 ssh_runner.go:195] Run: crio --version
	I1029 09:11:09.356771  323285 ssh_runner.go:195] Run: crio --version
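The two 60-second waits above poll for the runtime socket and then for a crictl handshake; the same checks by hand (a sketch; the endpoint was already written to /etc/crictl.yaml earlier, so the explicit flag is optional):

	# what the socket-path and crictl-version waits are checking
	stat /var/run/crio/crio.sock
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version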
	I1029 09:11:09.388520  323285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:11:09.389795  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:11:09.408274  323285 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:11:09.412583  323285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:11:09.424803  323285 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:11:09.426052  323285 kubeadm.go:884] updating cluster {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:11:09.426218  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:09.426300  323285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:09.460542  323285 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:09.460563  323285 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:11:09.460614  323285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:09.487044  323285 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:09.487068  323285 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:11:09.487079  323285 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:11:09.487186  323285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-259430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
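The unit text above is rendered into a systemd drop-in (scp'd a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). To see what systemd actually merges once it is installed (a sketch):

	# show the merged kubelet unit, including the ExecStart override above
	systemctl cat kubelet
	systemctl status kubelet --no-pager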
	I1029 09:11:09.487269  323285 ssh_runner.go:195] Run: crio config
	I1029 09:11:09.534905  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:11:09.534931  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:09.534948  323285 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:11:09.534974  323285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-259430 NodeName:newest-cni-259430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:11:09.535132  323285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-259430"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:11:09.535193  323285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:11:09.543772  323285 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:11:09.543833  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:11:09.552123  323285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:11:09.565265  323285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:11:09.581711  323285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
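The three scp'd payloads are the kubelet drop-in, the kubelet unit, and the kubeadm config rendered just above. Recent kubeadm releases can lint that last file before init consumes it (a sketch; assumes `kubeadm config validate` is available in the v1.34 binary):

	# validate the rendered config before kubeadm init runs
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new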
	I1029 09:11:09.595396  323285 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:11:09.599644  323285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:11:09.610487  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:09.692291  323285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:09.726091  323285 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430 for IP: 192.168.85.2
	I1029 09:11:09.726118  323285 certs.go:195] generating shared ca certs ...
	I1029 09:11:09.726141  323285 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.726315  323285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:11:09.726395  323285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:11:09.726414  323285 certs.go:257] generating profile certs ...
	I1029 09:11:09.726496  323285 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key
	I1029 09:11:09.726515  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt with IP's: []
	I1029 09:11:09.952951  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt ...
	I1029 09:11:09.952982  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt: {Name:mk4c95155e122c467607b07172eef79936ce7dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.953175  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key ...
	I1029 09:11:09.953188  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key: {Name:mk823250b94fe9a0154aa07226f6d7d2d7183a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.953268  323285 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3
	I1029 09:11:09.953284  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1029 09:11:10.526658  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 ...
	I1029 09:11:10.526687  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3: {Name:mk38b00ad6c7cfbe495c3451bae68542fb6d0084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.526859  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3 ...
	I1029 09:11:10.526874  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3: {Name:mk4e442214473ed9f59e8f778fdf753552f389cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.526956  323285 certs.go:382] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt
	I1029 09:11:10.527047  323285 certs.go:386] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key
	I1029 09:11:10.527110  323285 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key
	I1029 09:11:10.527127  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt with IP's: []
	I1029 09:11:10.693534  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt ...
	I1029 09:11:10.693566  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt: {Name:mk99151503057a9b4735d9a33bf9f994dbe8bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.693747  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key ...
	I1029 09:11:10.693761  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key: {Name:mkb37888acb09fb2cfa4458e6f93e0fa1bd40cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.693934  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:11:10.693972  323285 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:11:10.693982  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:11:10.694016  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:11:10.694037  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:11:10.694058  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:11:10.694104  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:10.694741  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:11:10.714478  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:11:10.733894  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:11:10.752731  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:11:10.771424  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:11:10.790531  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:11:10.809745  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:11:10.829770  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:11:10.848820  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:11:10.869632  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:11:10.888449  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:11:10.906606  323285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:11:10.920157  323285 ssh_runner.go:195] Run: openssl version
	I1029 09:11:10.926421  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:11:10.935727  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.940055  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.940117  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.975298  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:11:10.984671  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:11:10.994016  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:11:10.998049  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:11:10.998109  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:11:11.032768  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:11:11.042076  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:11:11.051249  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.055496  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.055557  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.090597  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
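The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: they are OpenSSL subject hashes, which is how the system trust store indexes CA certs. Reproduced by hand (a sketch):

	# derive the /etc/ssl/certs symlink names created above
	for pem in minikubeCA 7218 72182; do
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem.pem)
	  echo "$pem.pem -> /etc/ssl/certs/$h.0"
	done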
	I1029 09:11:11.099729  323285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:11:11.103802  323285 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:11:11.103864  323285 kubeadm.go:401] StartCluster: {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:11.103946  323285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:11:11.104033  323285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:11:11.131290  323285 cri.go:89] found id: ""
	I1029 09:11:11.131346  323285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:11:11.140423  323285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:11:11.148741  323285 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:11:11.148798  323285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:11:11.156810  323285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:11:11.156826  323285 kubeadm.go:158] found existing configuration files:
	
	I1029 09:11:11.156874  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:11:11.164570  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:11:11.164623  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:11:11.172197  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:11:11.180475  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:11:11.180538  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:11:11.188729  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:11:11.197081  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:11:11.197134  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:11:11.205164  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:11:11.213757  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:11:11.213834  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
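
Editor's note: the four grep/rm pairs above are minikube's stale-config sweep. Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed before kubeadm init runs. A minimal shell sketch of the same loop (endpoint and file list taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it targets the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
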
	I1029 09:11:11.222560  323285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:11:11.268456  323285 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 09:11:11.268507  323285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:11:11.290199  323285 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:11:11.290297  323285 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1029 09:11:11.290361  323285 kubeadm.go:319] OS: Linux
	I1029 09:11:11.290441  323285 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:11:11.290490  323285 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:11:11.290536  323285 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:11:11.290625  323285 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:11:11.290702  323285 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:11:11.290774  323285 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:11:11.290840  323285 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:11:11.290910  323285 kubeadm.go:319] CGROUPS_IO: enabled
	I1029 09:11:11.353151  323285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:11:11.353280  323285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:11:11.353455  323285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 09:11:11.361607  323285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1029 09:11:11.641711  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:14.140814  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:11:11.363970  323285 out.go:252]   - Generating certificates and keys ...
	I1029 09:11:11.364100  323285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:11:11.364205  323285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:11:11.568728  323285 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:11:11.698854  323285 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:11:12.039747  323285 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:11:12.129625  323285 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:11:12.340599  323285 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:11:12.340797  323285 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-259430] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:11:12.447881  323285 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:11:12.448051  323285 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-259430] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:11:12.809139  323285 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:11:13.118618  323285 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:11:13.421858  323285 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:11:13.421937  323285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:11:13.838287  323285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:11:13.908409  323285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:11:13.966840  323285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:11:14.294658  323285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:11:14.520651  323285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:11:14.521473  323285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:11:14.525440  323285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:11:16.641154  317625 pod_ready.go:94] pod "coredns-66bc5c9577-qtsxl" is "Ready"
	I1029 09:11:16.641182  317625 pod_ready.go:86] duration metric: took 32.006267628s for pod "coredns-66bc5c9577-qtsxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.644109  317625 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.649400  317625 pod_ready.go:94] pod "etcd-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.649427  317625 pod_ready.go:86] duration metric: took 5.291908ms for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.651669  317625 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.657129  317625 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.657156  317625 pod_ready.go:86] duration metric: took 5.462345ms for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.659534  317625 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.839252  317625 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.839288  317625 pod_ready.go:86] duration metric: took 179.72875ms for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.038505  317625 pod_ready.go:83] waiting for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.439109  317625 pod_ready.go:94] pod "kube-proxy-82xcl" is "Ready"
	I1029 09:11:17.439143  317625 pod_ready.go:86] duration metric: took 400.60463ms for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.638686  317625 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:18.038057  317625 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:18.038087  317625 pod_ready.go:86] duration metric: took 399.368296ms for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:18.038104  317625 pod_ready.go:40] duration metric: took 33.407465789s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:11:18.083317  317625 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:18.085224  317625 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-017274" cluster and "default" namespace by default
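
Editor's note: the pod_ready polling from pid 317625 (the concurrent default-k8s-diff-port-017274 start, interleaved into this stream) iterates over the control-plane pods by label/component until each reports Ready. With plain kubectl, a comparable wait — though stricter, since kubectl wait does not tolerate the pod disappearing the way "Ready or be gone" does — might look like:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=120s
    # ...and likewise for component=etcd, kube-controller-manager, kube-scheduler and k8s-app=kube-proxy
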
	I1029 09:11:14.527215  323285 out.go:252]   - Booting up control plane ...
	I1029 09:11:14.527330  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:11:14.528001  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:11:14.529019  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:11:14.543280  323285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:11:14.543401  323285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:11:14.550630  323285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:11:14.550841  323285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:11:14.550884  323285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:11:14.650739  323285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:11:14.650905  323285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:11:15.652524  323285 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001987708s
	I1029 09:11:15.655579  323285 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:11:15.655710  323285 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1029 09:11:15.655837  323285 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:11:15.655956  323285 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:11:16.826389  323285 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.170616379s
	I1029 09:11:17.564867  323285 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.909192382s
	I1029 09:11:19.659250  323285 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.003481114s
	I1029 09:11:19.671798  323285 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:11:19.684260  323285 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:11:19.699471  323285 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:11:19.699763  323285 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-259430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:11:19.710393  323285 kubeadm.go:319] [bootstrap-token] Using token: etunao.909gsmlonyfps6an
	I1029 09:11:19.712233  323285 out.go:252]   - Configuring RBAC rules ...
	I1029 09:11:19.712362  323285 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:11:19.717094  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:11:19.726179  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:11:19.730162  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:11:19.733946  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:11:19.737821  323285 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:11:20.066141  323285 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:11:20.488826  323285 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:11:21.065711  323285 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:11:21.066649  323285 kubeadm.go:319] 
	I1029 09:11:21.066715  323285 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:11:21.066724  323285 kubeadm.go:319] 
	I1029 09:11:21.066789  323285 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:11:21.066796  323285 kubeadm.go:319] 
	I1029 09:11:21.066849  323285 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:11:21.066954  323285 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:11:21.067064  323285 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:11:21.067084  323285 kubeadm.go:319] 
	I1029 09:11:21.067165  323285 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:11:21.067184  323285 kubeadm.go:319] 
	I1029 09:11:21.067246  323285 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:11:21.067257  323285 kubeadm.go:319] 
	I1029 09:11:21.067324  323285 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:11:21.067491  323285 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:11:21.067595  323285 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:11:21.067606  323285 kubeadm.go:319] 
	I1029 09:11:21.067731  323285 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:11:21.067854  323285 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:11:21.067869  323285 kubeadm.go:319] 
	I1029 09:11:21.068015  323285 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token etunao.909gsmlonyfps6an \
	I1029 09:11:21.068175  323285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 \
	I1029 09:11:21.068206  323285 kubeadm.go:319] 	--control-plane 
	I1029 09:11:21.068228  323285 kubeadm.go:319] 
	I1029 09:11:21.068341  323285 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:11:21.068355  323285 kubeadm.go:319] 
	I1029 09:11:21.068471  323285 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token etunao.909gsmlonyfps6an \
	I1029 09:11:21.068560  323285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 
	I1029 09:11:21.072046  323285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1029 09:11:21.072153  323285 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
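
Editor's note: the --discovery-token-ca-cert-hash echoed in the join commands is the SHA-256 of the cluster CA's public key. Following the standard kubeadm recipe, it can be recomputed on the control-plane node; the CA path below follows the certificateDir logged above, and an RSA CA key (minikube's default) is assumed:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
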
	I1029 09:11:21.072178  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:11:21.072201  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:21.075063  323285 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:11:21.076333  323285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:11:21.080941  323285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:11:21.080968  323285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:11:21.097427  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:11:21.329871  323285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:11:21.329963  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:21.330018  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-259430 minikube.k8s.io/updated_at=2025_10_29T09_11_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=newest-cni-259430 minikube.k8s.io/primary=true
	I1029 09:11:21.340318  323285 ops.go:34] apiserver oom_adj: -16
	I1029 09:11:21.427541  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:21.927714  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:22.428593  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:22.928549  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:23.427854  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:23.927662  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:24.427970  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:24.927865  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:25.427901  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:25.501188  323285 kubeadm.go:1114] duration metric: took 4.171293414s to wait for elevateKubeSystemPrivileges
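
Editor's note: the repeated `kubectl get sa default` runs between 09:11:21 and 09:11:25 are a simple 500ms poll — the elevateKubeSystemPrivileges step waits until the token controller has created the default ServiceAccount before proceeding. A bare-bones equivalent wait:

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount exists
    done
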
	I1029 09:11:25.501228  323285 kubeadm.go:403] duration metric: took 14.397367402s to StartCluster
	I1029 09:11:25.501250  323285 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:25.501330  323285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:25.502295  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:25.502553  323285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:11:25.502565  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:11:25.502588  323285 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:11:25.502688  323285 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-259430"
	I1029 09:11:25.502719  323285 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-259430"
	I1029 09:11:25.502730  323285 addons.go:70] Setting default-storageclass=true in profile "newest-cni-259430"
	I1029 09:11:25.502755  323285 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:25.502770  323285 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:25.502782  323285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-259430"
	I1029 09:11:25.503251  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.503349  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.504494  323285 out.go:179] * Verifying Kubernetes components...
	I1029 09:11:25.505968  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:25.527485  323285 addons.go:239] Setting addon default-storageclass=true in "newest-cni-259430"
	I1029 09:11:25.527539  323285 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:25.527972  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.530438  323285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:11:25.531693  323285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:25.531717  323285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:11:25.531789  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:25.560504  323285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:25.560528  323285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:11:25.560591  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:25.562044  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:25.586080  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:25.599802  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:11:25.657575  323285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:25.699467  323285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:25.704238  323285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:25.813033  323285 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
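
Editor's note: the sed pipeline at 09:11:25.599 rewrites the live coredns ConfigMap so that host.minikube.internal resolves to the gateway IP (192.168.85.1 here) via an injected CoreDNS hosts block with fallthrough. The result can be verified after the replace with:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected:
    #    hosts {
    #       192.168.85.1 host.minikube.internal
    #       fallthrough
    #    }
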
	I1029 09:11:25.813984  323285 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:11:25.814061  323285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:11:26.026073  323285 api_server.go:72] duration metric: took 523.482951ms to wait for apiserver process to appear ...
	I1029 09:11:26.026104  323285 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:11:26.026125  323285 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:26.031704  323285 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:11:26.032574  323285 api_server.go:141] control plane version: v1.34.1
	I1029 09:11:26.032596  323285 api_server.go:131] duration metric: took 6.485257ms to wait for apiserver health ...
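
Editor's note: the healthz poll above hits https://192.168.85.2:8443/healthz directly with the cluster's client credentials. Through an already-configured kubectl, the same endpoint (plus the newer, more detailed livez) is reachable without handling certs by hand:

    kubectl get --raw /healthz            # "ok" mirrors the 200 logged above
    kubectl get --raw '/livez?verbose'    # per-check breakdown
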
	I1029 09:11:26.032604  323285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:11:26.034592  323285 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 09:11:26.035774  323285 addons.go:515] duration metric: took 533.185042ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:11:26.036425  323285 system_pods.go:59] 8 kube-system pods found
	I1029 09:11:26.036459  323285 system_pods.go:61] "coredns-66bc5c9577-k74f5" [d32eecf7-613f-43fe-87b6-1c56dc6f7837] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:26.036469  323285 system_pods.go:61] "etcd-newest-cni-259430" [21bef91b-1e23-4c0b-836a-7d38dbcd158d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:11:26.036477  323285 system_pods.go:61] "kindnet-4555c" [e9503ed8-3583-471b-8ed2-cb19fa55932f] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:11:26.036483  323285 system_pods.go:61] "kube-apiserver-newest-cni-259430" [e2aa2d83-bd57-4b42-9f74-cc369442fb48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:11:26.036489  323285 system_pods.go:61] "kube-controller-manager-newest-cni-259430" [c8b1f927-8450-4b3d-8380-0d74388f7b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:11:26.036493  323285 system_pods.go:61] "kube-proxy-md8mn" [5b216c8f-e72c-44bd-ac4a-4f07213f90bb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:11:26.036499  323285 system_pods.go:61] "kube-scheduler-newest-cni-259430" [6dffb3f4-a5a2-456f-bfe4-34c2a0916645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:11:26.036510  323285 system_pods.go:61] "storage-provisioner" [b614d976-a2b2-4dff-9276-58ac33de3f70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:26.036517  323285 system_pods.go:74] duration metric: took 3.906841ms to wait for pod list to return data ...
	I1029 09:11:26.036528  323285 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:11:26.038941  323285 default_sa.go:45] found service account: "default"
	I1029 09:11:26.038970  323285 default_sa.go:55] duration metric: took 2.434992ms for default service account to be created ...
	I1029 09:11:26.038985  323285 kubeadm.go:587] duration metric: took 536.401056ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:11:26.039017  323285 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:11:26.041827  323285 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:11:26.041856  323285 node_conditions.go:123] node cpu capacity is 8
	I1029 09:11:26.041871  323285 node_conditions.go:105] duration metric: took 2.848114ms to run NodePressure ...
	I1029 09:11:26.041886  323285 start.go:242] waiting for startup goroutines ...
	I1029 09:11:26.317778  323285 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-259430" context rescaled to 1 replicas
	I1029 09:11:26.317823  323285 start.go:247] waiting for cluster config update ...
	I1029 09:11:26.317834  323285 start.go:256] writing updated cluster config ...
	I1029 09:11:26.318152  323285 ssh_runner.go:195] Run: rm -f paused
	I1029 09:11:26.372618  323285 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:26.375584  323285 out.go:179] * Done! kubectl is now configured to use "newest-cni-259430" cluster and "default" namespace by default
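
Editor's note: the final `rm -f paused` clears minikube's pause marker file before the run is declared done. A quick post-start sanity check for this profile (names taken from the log; minikube names the kubectl context after the profile) could be:

    minikube -p newest-cni-259430 status
    kubectl --context newest-cni-259430 get nodes -o wide
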
	
	
	==> CRI-O <==
	Oct 29 09:11:04 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:04.003300044Z" level=info msg="Started container" PID=1753 containerID=049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper id=9db337fe-0d37-41ab-88f3-d72f3d806820 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1ac86f52b56ec20cc039775ffbcdee17c0f811c4eb3063f25155c595e4012c2
	Oct 29 09:11:04 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:04.940701844Z" level=info msg="Removing container: c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663" id=a85184eb-4171-43a9-bc2e-ec2ae498c9c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:11:05 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:05.240837264Z" level=info msg="Removed container c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=a85184eb-4171-43a9-bc2e-ec2ae498c9c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.94516126Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=51b55299-9a9d-4217-9eab-2f837f2912e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.946161451Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2f3fe5fa-0d0a-4f8c-842e-3d2982c3e0d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.950100941Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=11bb5538-8678-42c2-b6de-4c613719e06e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.950258702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.955150563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.955368423Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/26b1dc5bf57b6834ad9deebb6ad353190ac3b4fbabeefacdd218f37dd8fcf10a/merged/etc/passwd: no such file or directory"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.955417027Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/26b1dc5bf57b6834ad9deebb6ad353190ac3b4fbabeefacdd218f37dd8fcf10a/merged/etc/group: no such file or directory"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.956249429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.982151997Z" level=info msg="Created container 9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546: kube-system/storage-provisioner/storage-provisioner" id=11bb5538-8678-42c2-b6de-4c613719e06e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.982826169Z" level=info msg="Starting container: 9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546" id=6bb552ca-feef-4b85-a5d0-b9479171ba47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.984656556Z" level=info msg="Started container" PID=1767 containerID=9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546 description=kube-system/storage-provisioner/storage-provisioner id=6bb552ca-feef-4b85-a5d0-b9479171ba47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a05601b284ddfb76e8e985c3ddaa9cefd747d00587eb271e46cec2ef1dc2c3cc
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.824390987Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5ed37546-0650-4f97-8f40-5480b374d777 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.825422197Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cf74c26b-8bbc-4ae9-99dc-3109272d1aed name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.82652955Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=28d0f4f8-e8a3-40bc-90e7-d13ec63d1cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.82667634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.832569919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.833094519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.862845477Z" level=info msg="Created container f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=28d0f4f8-e8a3-40bc-90e7-d13ec63d1cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.86357276Z" level=info msg="Starting container: f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3" id=4e74fc90-a0d1-49b9-b051-8dfc301f74c3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.865447279Z" level=info msg="Started container" PID=1803 containerID=f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper id=4e74fc90-a0d1-49b9-b051-8dfc301f74c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1ac86f52b56ec20cc039775ffbcdee17c0f811c4eb3063f25155c595e4012c2
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.983473062Z" level=info msg="Removing container: 049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3" id=78f9277f-3c4f-43c5-b476-3f8626ac0383 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.995468867Z" level=info msg="Removed container 049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=78f9277f-3c4f-43c5-b476-3f8626ac0383 name=/runtime.v1.RuntimeService/RemoveContainer
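
Editor's note: the CRI-O excerpt shows dashboard-metrics-scraper-6ffb444bf9-qgpb6 being created, exiting, and having its previous container garbage-collected twice within ~25s — a restart loop in back-off. Inside the node, the attempt history is visible through crictl, the same API surface these log lines come from:

    minikube -p default-k8s-diff-port-017274 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
    # ATTEMPT increments and STATE flips between Running and Exited on each loop
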
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f18669e29ce3b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   b1ac86f52b56e       dashboard-metrics-scraper-6ffb444bf9-qgpb6             kubernetes-dashboard
	9f8b022f6197b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   a05601b284ddf       storage-provisioner                                    kube-system
	6a233da0986cd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   b0550839b9aae       kubernetes-dashboard-855c9754f9-4kfgv                  kubernetes-dashboard
	ba3638e3a9f2a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   ce5eb26bee0cf       busybox                                                default
	b307f64c120f3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   34f269a75a0e4       coredns-66bc5c9577-qtsxl                               kube-system
	622a00b140b2c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   58d06db2a0823       kube-proxy-82xcl                                       kube-system
	6f632ec2ab17f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   a05601b284ddf       storage-provisioner                                    kube-system
	4ce19f536b3e7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   6657d88a668c3       kindnet-tdtxm                                          kube-system
	7e6fae9cd623c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   b7121fee26dff       kube-scheduler-default-k8s-diff-port-017274            kube-system
	f86c6058a7094       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   59c6e6e7a6b49       kube-apiserver-default-k8s-diff-port-017274            kube-system
	16de8e1e0e29b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   d80c61951efb3       kube-controller-manager-default-k8s-diff-port-017274   kube-system
	bf3d3afb886dc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   dff9b51d4187a       etcd-default-k8s-diff-port-017274                      kube-system
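
Editor's note: the table above is crictl's view of the node (note ATTEMPT=3 and STATE=Exited for the scraper, matching the CRI-O log). The exited container's own output, which would explain the restart loop, can be pulled by the ID prefix shown in the first column:

    minikube -p default-k8s-diff-port-017274 ssh -- sudo crictl logs f18669e29ce3b
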
	
	
	==> coredns [b307f64c120f3819158edb444be7e97b99be83763b6b415d2244b39fe00046f9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53623 - 50449 "HINFO IN 4280449570087446041.7047790352421022031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082582763s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
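
Editor's note: the "dial tcp 10.96.0.1:443: i/o timeout" errors mean coredns could not reach the kubernetes Service VIP while kube-proxy/kindnet were still programming the dataplane; the list/watch evidently succeeded afterwards, since no further errors appear. Two quick checks for when this persists rather than clearing:

    kubectl get endpoints kubernetes                             # the real apiserver endpoint behind 10.96.0.1
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20    # confirm the errors have stopped
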
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-017274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-017274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=default-k8s-diff-port-017274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-017274
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:11:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:10:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-017274
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c5ea9dce-72e7-4834-9b46-0ce5130939cc
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-qtsxl                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-default-k8s-diff-port-017274                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-tdtxm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-default-k8s-diff-port-017274             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-017274    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-82xcl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-default-k8s-diff-port-017274             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qgpb6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4kfgv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x8 over 112s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     107s                 kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  107s                 kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s                 kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s                 node-controller  Node default-k8s-diff-port-017274 event: Registered Node default-k8s-diff-port-017274 in Controller
	  Normal  NodeReady                90s                  kubelet          Node default-k8s-diff-port-017274 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node default-k8s-diff-port-017274 event: Registered Node default-k8s-diff-port-017274 in Controller
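
Editor's note: this section is `kubectl describe node` output; the repeated NodeHasSufficient*/Starting event sets (112s, 107s, and again at 53s) line up with the kubelet (re)starts during this test — initial cluster creation and the restart under test. To re-query just the event stream:

    kubectl describe node default-k8s-diff-port-017274
    kubectl get events --field-selector involvedObject.name=default-k8s-diff-port-017274 --sort-by=.lastTimestamp
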
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
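
Editor's note: the "martian source" lines are the kernel flagging packets whose source address is unexpected for the receiving interface — here pod-CIDR traffic (10.244.0.x) arriving on eth0, a common by-product of the bridge/CNI setups in these tests rather than a failure in itself. Whether this logging is enabled is controlled by a sysctl that can be inspected on the host:

    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter
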
	
	
	==> etcd [bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21] <==
	{"level":"warn","ts":"2025-10-29T09:10:42.365615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.373112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.379898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.386720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.394168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.402568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.411150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.419941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.430877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.438041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.445152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.452918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.460252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.467084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.474124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.481897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.490112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.513354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.527631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.574641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58422","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:11:03.615459Z","caller":"traceutil/trace.go:172","msg":"trace[365937369] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"110.997339ms","start":"2025-10-29T09:11:03.504432Z","end":"2025-10-29T09:11:03.615429Z","steps":["trace[365937369] 'process raft request'  (duration: 110.853397ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:11:03.862585Z","caller":"traceutil/trace.go:172","msg":"trace[245654951] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"115.223476ms","start":"2025-10-29T09:11:03.747341Z","end":"2025-10-29T09:11:03.862565Z","steps":["trace[245654951] 'process raft request'  (duration: 115.038389ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:11:04.038446Z","caller":"traceutil/trace.go:172","msg":"trace[1934644612] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"110.053765ms","start":"2025-10-29T09:11:03.928370Z","end":"2025-10-29T09:11:04.038423Z","steps":["trace[1934644612] 'process raft request'  (duration: 109.772472ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:11:04.203222Z","caller":"traceutil/trace.go:172","msg":"trace[1534120381] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"106.659062ms","start":"2025-10-29T09:11:04.096536Z","end":"2025-10-29T09:11:04.203195Z","steps":["trace[1534120381] 'process raft request'  (duration: 93.620768ms)","trace[1534120381] 'compare'  (duration: 12.900719ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-29T09:11:04.535749Z","caller":"traceutil/trace.go:172","msg":"trace[1075526035] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"112.97477ms","start":"2025-10-29T09:11:04.422654Z","end":"2025-10-29T09:11:04.535629Z","steps":["trace[1075526035] 'process raft request'  (duration: 112.766353ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:33 up 54 min,  0 user,  load average: 4.54, 4.19, 2.70
	Linux default-k8s-diff-port-017274 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4ce19f536b3e79a00539eda45389baa388b6e72af2f5f3735054624a5e24cc23] <==
	I1029 09:10:44.450781       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:44.451043       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1029 09:10:44.548076       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:44.548105       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:44.548127       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:44.708798       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:44.708977       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:44.709010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:44.709254       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:45.048128       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:45.048156       1 metrics.go:72] Registering metrics
	I1029 09:10:45.048234       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:54.709097       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:10:54.709155       1 main.go:301] handling current node
	I1029 09:11:04.709384       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:11:04.709434       1 main.go:301] handling current node
	I1029 09:11:14.709490       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:11:14.709550       1 main.go:301] handling current node
	I1029 09:11:24.709960       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:11:24.710036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e] <==
	I1029 09:10:43.099675       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:10:43.099799       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:10:43.099846       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:10:43.100357       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:10:43.100372       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:10:43.100379       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:43.100386       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:10:43.105032       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1029 09:10:43.105943       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:10:43.121927       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:10:43.127671       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:10:43.133284       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:10:43.133327       1 policy_source.go:240] refreshing policies
	I1029 09:10:43.220896       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:10:43.386884       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:10:43.419081       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:10:43.445097       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:43.455720       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:43.464455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:10:43.506614       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.32.103"}
	I1029 09:10:43.519128       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.107.50"}
	I1029 09:10:44.001501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:46.097827       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:10:46.149468       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:10:46.348708       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6] <==
	I1029 09:10:45.709582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:10:45.710586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:10:45.710632       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:10:45.713831       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:10:45.716149       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:10:45.744717       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:10:45.744746       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:10:45.744717       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:10:45.744744       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:10:45.744846       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:10:45.745049       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:10:45.745452       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:10:45.746724       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:10:45.746813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:10:45.746945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-017274"
	I1029 09:10:45.747013       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 09:10:45.749017       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:10:45.749477       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:45.750806       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:10:45.753034       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:10:45.755977       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:10:45.758423       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:10:45.760374       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:10:45.762947       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:10:45.770345       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [622a00b140b2c0609cf8cf6561c828e18ba9000776cbc4c975473747329412e8] <==
	I1029 09:10:44.261605       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:10:44.332167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:10:44.433216       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:10:44.433262       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1029 09:10:44.433380       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:10:44.453690       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:44.453748       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:10:44.459007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:10:44.459398       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:10:44.459428       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:44.460747       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:10:44.460780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:10:44.460776       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:10:44.460800       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:10:44.460831       1 config.go:200] "Starting service config controller"
	I1029 09:10:44.460838       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:10:44.460847       1 config.go:309] "Starting node config controller"
	I1029 09:10:44.460866       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:10:44.460875       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:10:44.561149       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:10:44.561235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:10:44.561262       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416] <==
	I1029 09:10:41.760848       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:10:43.017360       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:10:43.017396       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:10:43.017408       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:10:43.017417       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:10:43.074051       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:10:43.074097       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:43.078059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:10:43.078545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:10:43.078897       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:43.080093       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:43.181047       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:10:46 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:46.372217     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqd8\" (UniqueName: \"kubernetes.io/projected/0df4a7f6-44d6-434b-b4a6-15ecc6298dc6-kube-api-access-fsqd8\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpb6\" (UID: \"0df4a7f6-44d6-434b-b4a6-15ecc6298dc6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6"
	Oct 29 09:10:46 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:46.372243     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0df4a7f6-44d6-434b-b4a6-15ecc6298dc6-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpb6\" (UID: \"0df4a7f6-44d6-434b-b4a6-15ecc6298dc6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6"
	Oct 29 09:10:46 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:46.372322     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1a5c8fb6-1c63-42d2-8b52-de30e9a56c2c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4kfgv\" (UID: \"1a5c8fb6-1c63-42d2-8b52-de30e9a56c2c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4kfgv"
	Oct 29 09:10:52 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:52.877763     721 scope.go:117] "RemoveContainer" containerID="540f863b4d80ce46e133380f1883aeb49643664d9b2fe555fc7a9f2911a9db40"
	Oct 29 09:10:52 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:52.892123     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4kfgv" podStartSLOduration=3.661496395 podStartE2EDuration="6.892099995s" podCreationTimestamp="2025-10-29 09:10:46 +0000 UTC" firstStartedPulling="2025-10-29 09:10:46.602408428 +0000 UTC m=+5.875219143" lastFinishedPulling="2025-10-29 09:10:49.833012028 +0000 UTC m=+9.105822743" observedRunningTime="2025-10-29 09:10:50.885411077 +0000 UTC m=+10.158221799" watchObservedRunningTime="2025-10-29 09:10:52.892099995 +0000 UTC m=+12.164910721"
	Oct 29 09:10:53 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:53.882473     721 scope.go:117] "RemoveContainer" containerID="540f863b4d80ce46e133380f1883aeb49643664d9b2fe555fc7a9f2911a9db40"
	Oct 29 09:10:53 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:53.882635     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:10:53 default-k8s-diff-port-017274 kubelet[721]: E1029 09:10:53.882854     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:10:54 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:54.887387     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:10:54 default-k8s-diff-port-017274 kubelet[721]: E1029 09:10:54.887607     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:03 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:03.497121     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:11:04 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:04.914869     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:11:04 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:04.915096     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:04 default-k8s-diff-port-017274 kubelet[721]: E1029 09:11:04.915333     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:13 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:13.496921     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:13 default-k8s-diff-port-017274 kubelet[721]: E1029 09:11:13.497116     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:14 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:14.944727     721 scope.go:117] "RemoveContainer" containerID="6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:26.823798     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:26.981969     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:26.982273     721 scope.go:117] "RemoveContainer" containerID="f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: E1029 09:11:26.982510     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: kubelet.service: Consumed 1.837s CPU time.
	
	
	==> kubernetes-dashboard [6a233da0986cdd4b355e6ad5ef2ef59ef1cc2366e325cd151f90f0e07579e1d5] <==
	2025/10/29 09:10:49 Starting overwatch
	2025/10/29 09:10:49 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:49 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:49 Using secret token for csrf signing
	2025/10/29 09:10:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:49 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:10:49 Generating JWE encryption key
	2025/10/29 09:10:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:50 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:50 Creating in-cluster Sidecar client
	2025/10/29 09:10:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:50 Serving insecurely on HTTP port: 9090
	2025/10/29 09:11:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4] <==
	I1029 09:10:44.220947       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:11:14.223711       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546] <==
	I1029 09:11:14.997381       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:11:15.004746       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:11:15.004798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:11:15.007362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:18.462725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:22.723858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:26.322540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:29.376598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:32.399349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:32.405470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:11:32.405680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:11:32.406228       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc8dd318-9670-4d4d-99bd-9ed78324108f", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-017274_71b3aaf8-a893-4acc-810a-439b6056f8f3 became leader
	I1029 09:11:32.406356       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-017274_71b3aaf8-a893-4acc-810a-439b6056f8f3!
	W1029 09:11:32.412297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:32.417582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:11:32.506945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-017274_71b3aaf8-a893-4acc-810a-439b6056f8f3!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274: exit status 2 (344.335769ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-017274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
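For reference, the post-mortem triage above boils down to the following command sequence (a minimal sketch using only the invocations already logged in this run; the binary path and profile name are specific to this job):
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
	kubectl --context default-k8s-diff-port-017274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
A non-zero exit from the status check feeds the "(may be ok)" note above; the kubectl query then lists any pods not in the Running phase.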
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-017274
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-017274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb",
	        "Created": "2025-10-29T09:09:32.123718192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317827,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:10:34.5663352Z",
	            "FinishedAt": "2025-10-29T09:10:33.60337614Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/hosts",
	        "LogPath": "/var/lib/docker/containers/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb/7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb-json.log",
	        "Name": "/default-k8s-diff-port-017274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-017274:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-017274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7cabc8999167dbb3e8f0a25e3a42281412290f3687daccc52258e460dbd5bcbb",
	                "LowerDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/117a7a2ef77d077fb877fd0c4a60a9815c28a651245a5dc97bd62489d2fb82c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-017274",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-017274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-017274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-017274",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-017274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0696f90b06524da7694148a9be743a1d593fe477b6d899eabcb52f64512155b3",
	            "SandboxKey": "/var/run/docker/netns/0696f90b0652",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-017274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:eb:f5:17:cb:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3eee94d37a532f968084ecba10a40919f575531a63b06a3b1433848fa7502a53",
	                    "EndpointID": "c411a7ebd1b61fa0c71632154653881debf500c761abd6a89c76f4d9207ce35e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-017274",
	                        "7cabc8999167"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
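When reproducing this locally, the host-port mappings recorded in the inspect output above can be pulled out directly (a sketch, assuming the jq binary is available on the agent):
	docker inspect default-k8s-diff-port-017274 | jq '.[0].NetworkSettings.Ports'
This reads the same NetworkSettings.Ports object shown verbatim in the JSON above (ssh on 33123, the apiserver's 8444/tcp on 33126, and so on).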
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274: exit status 2 (337.051194ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-017274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-017274 logs -n 25: (1.137538278s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-017274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-017274 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-259430 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ image   │ default-k8s-diff-port-017274 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ pause   │ -p default-k8s-diff-port-017274 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:10:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:10:59.394267  323285 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:10:59.394622  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394635  323285 out.go:374] Setting ErrFile to fd 2...
	I1029 09:10:59.394640  323285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:10:59.394949  323285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:10:59.395669  323285 out.go:368] Setting JSON to false
	I1029 09:10:59.397426  323285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3207,"bootTime":1761725852,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:10:59.397490  323285 start.go:143] virtualization: kvm guest
	I1029 09:10:59.399709  323285 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:10:59.401275  323285 notify.go:221] Checking for updates...
	I1029 09:10:59.401303  323285 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:10:59.402811  323285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:10:59.404227  323285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:10:59.405575  323285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:10:59.406888  323285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:10:59.408222  323285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:10:59.410015  323285 config.go:182] Loaded profile config "default-k8s-diff-port-017274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410148  323285 config.go:182] Loaded profile config "embed-certs-834228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410263  323285 config.go:182] Loaded profile config "no-preload-043790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:10:59.410378  323285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:10:59.435730  323285 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:10:59.435827  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.503060  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.489541208 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.503182  323285 docker.go:319] overlay module found
	I1029 09:10:59.505269  323285 out.go:179] * Using the docker driver based on user configuration
	I1029 09:10:59.506723  323285 start.go:309] selected driver: docker
	I1029 09:10:59.506741  323285 start.go:930] validating driver "docker" against <nil>
	I1029 09:10:59.506755  323285 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:10:59.507436  323285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:10:59.587780  323285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-29 09:10:59.571693978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:10:59.588075  323285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1029 09:10:59.588122  323285 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1029 09:10:59.588720  323285 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:10:59.590863  323285 out.go:179] * Using Docker driver with root privileges
	I1029 09:10:59.592506  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:10:59.592592  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:10:59.592606  323285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:10:59.592730  323285 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:10:59.594390  323285 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:10:59.595763  323285 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:10:59.597231  323285 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:10:59.598574  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:10:59.598631  323285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:10:59.598649  323285 cache.go:59] Caching tarball of preloaded images
	I1029 09:10:59.598672  323285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:10:59.598768  323285 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:10:59.598779  323285 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:10:59.598919  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:10:59.598949  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json: {Name:mked5dfa4485c424df381c0f3cdc9d7d7ae817f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:10:59.625501  323285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:10:59.625521  323285 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:10:59.625543  323285 cache.go:233] Successfully downloaded all kic artifacts
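	The three cache lines above boil down to one question: is the kicbase digest already present in the local Docker daemon, so the pull can be skipped? A minimal sketch of that check, shelling out to the docker CLI the way cli_runner does; the imageInDaemon helper is illustrative, not minikube's API:

```go
package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local Docker daemon already holds the
// given image reference. `docker image inspect` exits non-zero when the
// image is absent, so the exit code alone answers the question.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	if imageInDaemon(ref) {
		fmt.Println("exists in daemon, skipping load")
	} else {
		fmt.Println("not cached, would pull")
	}
}
```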
	I1029 09:10:59.625570  323285 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:10:59.625670  323285 start.go:364] duration metric: took 83.48µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:10:59.625695  323285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:10:59.625758  323285 start.go:125] createHost starting for "" (driver="docker")
	W1029 09:11:00.144468  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:02.642293  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:10:59.627620  323285 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1029 09:10:59.627853  323285 start.go:159] libmachine.API.Create for "newest-cni-259430" (driver="docker")
	I1029 09:10:59.627883  323285 client.go:173] LocalClient.Create starting
	I1029 09:10:59.627960  323285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem
	I1029 09:10:59.628018  323285 main.go:143] libmachine: Decoding PEM data...
	I1029 09:10:59.628045  323285 main.go:143] libmachine: Parsing certificate...
	I1029 09:10:59.628095  323285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem
	I1029 09:10:59.628122  323285 main.go:143] libmachine: Decoding PEM data...
	I1029 09:10:59.628138  323285 main.go:143] libmachine: Parsing certificate...
	I1029 09:10:59.628554  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1029 09:10:59.648491  323285 cli_runner.go:211] docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1029 09:10:59.648580  323285 network_create.go:284] running [docker network inspect newest-cni-259430] to gather additional debugging logs...
	I1029 09:10:59.648603  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430
	W1029 09:10:59.670427  323285 cli_runner.go:211] docker network inspect newest-cni-259430 returned with exit code 1
	I1029 09:10:59.670462  323285 network_create.go:287] error running [docker network inspect newest-cni-259430]: docker network inspect newest-cni-259430: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-259430 not found
	I1029 09:10:59.670476  323285 network_create.go:289] output of [docker network inspect newest-cni-259430]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-259430 not found
	
	** /stderr **
	I1029 09:10:59.670560  323285 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:10:59.691834  323285 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b27c046ec42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:bc:cb:4a:50:f2} reservation:<nil>}
	I1029 09:10:59.692456  323285 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0c15025939eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:79:05:d8:32:73} reservation:<nil>}
	I1029 09:10:59.693254  323285 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5e92a9c19423 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:ec:bb:72:ab:23} reservation:<nil>}
	I1029 09:10:59.693813  323285 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86d19029abe0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:37:1e:54:39:51} reservation:<nil>}
	I1029 09:10:59.694835  323285 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10110}
	I1029 09:10:59.694867  323285 network_create.go:124] attempt to create docker network newest-cni-259430 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1029 09:10:59.694938  323285 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-259430 newest-cni-259430
	I1029 09:10:59.769631  323285 network_create.go:108] docker network newest-cni-259430 192.168.85.0/24 created
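	network.go picks the cluster subnet by walking private /24 candidates upward from 192.168.49.0 in steps of 9 in the third octet (49, 58, 67, 76, 85 in this run) and taking the first one no existing bridge occupies. A self-contained sketch of that selection logic, with the taken subnets hardcoded from the log:

```go
package main

import (
	"fmt"
	"net"
)

// pickFreeSubnet walks private /24 candidates starting at 192.168.49.0 in
// steps of 9 in the third octet and returns the first one that does not
// overlap an already-taken subnet (mirrors the skip/use lines in the log).
func pickFreeSubnet(taken []*net.IPNet) *net.IPNet {
	for octet := 49; octet <= 247; octet += 9 {
		_, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		free := true
		for _, t := range taken {
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return cand
		}
	}
	return nil
}

func main() {
	var taken []*net.IPNet
	// Subnets the log reports as taken by existing minikube bridges.
	for _, s := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"} {
		_, n, _ := net.ParseCIDR(s)
		taken = append(taken, n)
	}
	fmt.Println(pickFreeSubnet(taken)) // prints 192.168.85.0/24, matching the log
}
```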
	I1029 09:10:59.769672  323285 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-259430" container
	I1029 09:10:59.769753  323285 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1029 09:10:59.791054  323285 cli_runner.go:164] Run: docker volume create newest-cni-259430 --label name.minikube.sigs.k8s.io=newest-cni-259430 --label created_by.minikube.sigs.k8s.io=true
	I1029 09:10:59.815466  323285 oci.go:103] Successfully created a docker volume newest-cni-259430
	I1029 09:10:59.815571  323285 cli_runner.go:164] Run: docker run --rm --name newest-cni-259430-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-259430 --entrypoint /usr/bin/test -v newest-cni-259430:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1029 09:11:00.296980  323285 oci.go:107] Successfully prepared a docker volume newest-cni-259430
	I1029 09:11:00.297051  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:00.297213  323285 kic.go:194] Starting extracting preloaded images to volume ...
	I1029 09:11:00.297322  323285 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-259430:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1029 09:11:04.712172  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:07.141484  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:09.142117  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:11:05.253096  323285 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-259430:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.955700802s)
	I1029 09:11:05.253129  323285 kic.go:203] duration metric: took 4.955930157s to extract preloaded images to volume ...
	W1029 09:11:05.253214  323285 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1029 09:11:05.253260  323285 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1029 09:11:05.253319  323285 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1029 09:11:05.315847  323285 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-259430 --name newest-cni-259430 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-259430 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-259430 --network newest-cni-259430 --ip 192.168.85.2 --volume newest-cni-259430:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1029 09:11:05.869187  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Running}}
	I1029 09:11:05.893258  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:05.916213  323285 cli_runner.go:164] Run: docker exec newest-cni-259430 stat /var/lib/dpkg/alternatives/iptables
	I1029 09:11:05.978806  323285 oci.go:144] the created container "newest-cni-259430" has a running status.
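	The back-to-back `docker container inspect --format={{.State.Running}}` calls are a readiness poll on the freshly started container. A sketch of the same loop with an explicit deadline; waitRunning is an illustrative helper, not minikube code:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until the container reports
// State.Running=true or the deadline passes.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Running}}", name).Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("newest-cni-259430", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```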
	I1029 09:11:05.978874  323285 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa...
	I1029 09:11:06.219653  323285 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1029 09:11:06.545636  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:06.569771  323285 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1029 09:11:06.569799  323285 kic_runner.go:114] Args: [docker exec --privileged newest-cni-259430 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1029 09:11:06.628943  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:06.652327  323285 machine.go:94] provisionDockerMachine start ...
	I1029 09:11:06.652444  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:06.681514  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:06.681819  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:06.681843  323285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:11:06.838511  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:06.838546  323285 ubuntu.go:182] provisioning hostname "newest-cni-259430"
	I1029 09:11:06.838634  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:06.859040  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:06.859350  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:06.859374  323285 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-259430 && echo "newest-cni-259430" | sudo tee /etc/hostname
	I1029 09:11:07.013620  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:07.013721  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.037196  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:07.037409  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:07.037428  323285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-259430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-259430/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-259430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:11:07.183951  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
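	libmachine's "native" SSH client above is golang.org/x/crypto/ssh dialing the container's published 22/tcp port (127.0.0.1:33128 here) with the generated id_rsa key. A minimal sketch of one such provisioning round-trip; host-key verification is skipped, as is reasonable for a throwaway local container:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as reported by the log for this profile.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33128", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same first command the provisioner runs: `hostname`.
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}
```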
	I1029 09:11:07.184022  323285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:11:07.184048  323285 ubuntu.go:190] setting up certificates
	I1029 09:11:07.184060  323285 provision.go:84] configureAuth start
	I1029 09:11:07.184115  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:07.202495  323285 provision.go:143] copyHostCerts
	I1029 09:11:07.202577  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:11:07.202592  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:11:07.202673  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:11:07.202793  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:11:07.202805  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:11:07.202849  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:11:07.202933  323285 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:11:07.202943  323285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:11:07.202984  323285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:11:07.203078  323285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.newest-cni-259430 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-259430]
	I1029 09:11:07.395413  323285 provision.go:177] copyRemoteCerts
	I1029 09:11:07.395479  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:11:07.395531  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.414871  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:07.517040  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:11:07.538399  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:11:07.557923  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:11:07.577096  323285 provision.go:87] duration metric: took 393.019887ms to configureAuth
	I1029 09:11:07.577128  323285 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:11:07.577309  323285 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:07.577427  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.597565  323285 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:07.597783  323285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1029 09:11:07.597799  323285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:11:07.865697  323285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:11:07.865723  323285 machine.go:97] duration metric: took 1.213371631s to provisionDockerMachine
	I1029 09:11:07.865734  323285 client.go:176] duration metric: took 8.237846029s to LocalClient.Create
	I1029 09:11:07.865755  323285 start.go:167] duration metric: took 8.237903765s to libmachine.API.Create "newest-cni-259430"
	I1029 09:11:07.865764  323285 start.go:293] postStartSetup for "newest-cni-259430" (driver="docker")
	I1029 09:11:07.865778  323285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:11:07.865871  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:11:07.865931  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:07.885321  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:07.991029  323285 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:11:07.994753  323285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:11:07.994789  323285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:11:07.994799  323285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:11:07.994848  323285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:11:07.994930  323285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:11:07.995049  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:11:08.003392  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:08.025461  323285 start.go:296] duration metric: took 159.680734ms for postStartSetup
	I1029 09:11:08.025834  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:08.047276  323285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:11:08.047502  323285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:11:08.047547  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.066779  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.172233  323285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:11:08.177185  323285 start.go:128] duration metric: took 8.551412166s to createHost
	I1029 09:11:08.177213  323285 start.go:83] releasing machines lock for "newest-cni-259430", held for 8.551530554s
	I1029 09:11:08.177283  323285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:08.197450  323285 ssh_runner.go:195] Run: cat /version.json
	I1029 09:11:08.197522  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.197562  323285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:11:08.197635  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:08.217275  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.217726  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:08.376049  323285 ssh_runner.go:195] Run: systemctl --version
	I1029 09:11:08.383134  323285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:11:08.422212  323285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:11:08.427525  323285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:11:08.427605  323285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:11:08.464435  323285 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1029 09:11:08.464463  323285 start.go:496] detecting cgroup driver to use...
	I1029 09:11:08.464495  323285 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:11:08.464546  323285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:11:08.481110  323285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:11:08.494209  323285 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:11:08.494260  323285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:11:08.511612  323285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:11:08.530553  323285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:11:08.621566  323285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:11:08.726166  323285 docker.go:234] disabling docker service ...
	I1029 09:11:08.726224  323285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:11:08.746348  323285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:11:08.760338  323285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:11:08.858295  323285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:11:08.943579  323285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:11:08.957200  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:11:08.972520  323285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:11:08.972577  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:08.983843  323285 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:11:08.983921  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:08.993498  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.003269  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.014275  323285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:11:09.023507  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.033766  323285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.051114  323285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:09.061145  323285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:11:09.069157  323285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:11:09.078220  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:09.173060  323285 ssh_runner.go:195] Run: sudo systemctl restart crio
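	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, then daemon-reload and restart crio. The first two substitutions expressed in Go with regexp, as a sketch rather than minikube's actual crio.go:

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl daemon-reload && systemctl restart crio` must follow,
	// as the log does at 09:11:09.078/09:11:09.173.
}
```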
	I1029 09:11:09.290124  323285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:11:09.290180  323285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:11:09.294386  323285 start.go:564] Will wait 60s for crictl version
	I1029 09:11:09.294446  323285 ssh_runner.go:195] Run: which crictl
	I1029 09:11:09.298964  323285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:11:09.328014  323285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
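	"Will wait 60s for socket path /var/run/crio/crio.sock" is a plain stat-poll until the runtime socket shows up after the restart. A sketch, assuming a fixed 250ms poll interval:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses,
// mirroring the 60s wait the log announces.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
```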
	I1029 09:11:09.328085  323285 ssh_runner.go:195] Run: crio --version
	I1029 09:11:09.356771  323285 ssh_runner.go:195] Run: crio --version
	I1029 09:11:09.388520  323285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:11:09.389795  323285 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:11:09.408274  323285 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:11:09.412583  323285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
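	The bash one-liner above updates /etc/hosts idempotently: filter out any existing host.minikube.internal mapping, append the fresh one, and copy the result back under sudo. The same filter-and-append logic in Go, as a sketch; the real code shells out exactly as logged because the write needs root inside the guest:

```go
package main

import (
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line
// maps the given name, matching the grep -v / echo / cp one-liner above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```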
	I1029 09:11:09.424803  323285 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:11:09.426052  323285 kubeadm.go:884] updating cluster {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:11:09.426218  323285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:09.426300  323285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:09.460542  323285 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:09.460563  323285 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:11:09.460614  323285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:09.487044  323285 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:09.487068  323285 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:11:09.487079  323285 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:11:09.487186  323285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-259430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:11:09.487269  323285 ssh_runner.go:195] Run: crio config
	I1029 09:11:09.534905  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:11:09.534931  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:09.534948  323285 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:11:09.534974  323285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-259430 NodeName:newest-cni-259430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:11:09.535132  323285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-259430"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:11:09.535193  323285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:11:09.543772  323285 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:11:09.543833  323285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:11:09.552123  323285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:11:09.565265  323285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:11:09.581711  323285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
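	The 2211-byte kubeadm.yaml shipped above is rendered from the kubeadm options struct (kubeadm.go:190/196) and copied to /var/tmp/minikube. A toy text/template rendering of just the networking stanza, to show the mechanism; the template text here is illustrative, not minikube's:

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative template; minikube's real kubeadm template is larger.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values copied from the rendered config above.
	opts := struct {
		Endpoint, Version, DNSDomain, PodSubnet, ServiceCIDR string
	}{
		Endpoint:    "control-plane.minikube.internal:8443",
		Version:     "v1.34.1",
		DNSDomain:   "cluster.local",
		PodSubnet:   "10.42.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}
```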
	I1029 09:11:09.595396  323285 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:11:09.599644  323285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:11:09.610487  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:09.692291  323285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:09.726091  323285 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430 for IP: 192.168.85.2
	I1029 09:11:09.726118  323285 certs.go:195] generating shared ca certs ...
	I1029 09:11:09.726141  323285 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.726315  323285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:11:09.726395  323285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:11:09.726414  323285 certs.go:257] generating profile certs ...
	I1029 09:11:09.726496  323285 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key
	I1029 09:11:09.726515  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt with IP's: []
	I1029 09:11:09.952951  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt ...
	I1029 09:11:09.952982  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.crt: {Name:mk4c95155e122c467607b07172eef79936ce7dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.953175  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key ...
	I1029 09:11:09.953188  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key: {Name:mk823250b94fe9a0154aa07226f6d7d2d7183a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:09.953268  323285 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3
	I1029 09:11:09.953284  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1029 09:11:10.526658  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 ...
	I1029 09:11:10.526687  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3: {Name:mk38b00ad6c7cfbe495c3451bae68542fb6d0084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.526859  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3 ...
	I1029 09:11:10.526874  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3: {Name:mk4e442214473ed9f59e8f778fdf753552f389cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.526956  323285 certs.go:382] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt.64cd47c3 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt
	I1029 09:11:10.527047  323285 certs.go:386] copying /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3 -> /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key
	I1029 09:11:10.527110  323285 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key
	I1029 09:11:10.527127  323285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt with IP's: []
	I1029 09:11:10.693534  323285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt ...
	I1029 09:11:10.693566  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt: {Name:mk99151503057a9b4735d9a33bf9f994dbe8bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:10.693747  323285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key ...
	I1029 09:11:10.693761  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key: {Name:mkb37888acb09fb2cfa4458e6f93e0fa1bd40cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
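	Each "generating signed profile cert ... with IP's: [...]" line above is ordinary crypto/x509 issuance with IP SANs, signed by the shared minikubeCA. A condensed, runnable sketch using the SANs from the apiserver cert (errors ignored for brevity; real code must check them):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, a stand-in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server cert with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```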
	I1029 09:11:10.693934  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:11:10.693972  323285 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:11:10.693982  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:11:10.694016  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:11:10.694037  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:11:10.694058  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:11:10.694104  323285 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:10.694741  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:11:10.714478  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:11:10.733894  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:11:10.752731  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:11:10.771424  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:11:10.790531  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:11:10.809745  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:11:10.829770  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:11:10.848820  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:11:10.869632  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:11:10.888449  323285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:11:10.906606  323285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:11:10.920157  323285 ssh_runner.go:195] Run: openssl version
	I1029 09:11:10.926421  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:11:10.935727  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.940055  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.940117  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:10.975298  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:11:10.984671  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:11:10.994016  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:11:10.998049  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:11:10.998109  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:11:11.032768  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:11:11.042076  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:11:11.051249  323285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.055496  323285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.055557  323285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:11:11.090597  323285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:11:11.099729  323285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:11:11.103802  323285 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 09:11:11.103864  323285 kubeadm.go:401] StartCluster: {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:11.103946  323285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:11:11.104033  323285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:11:11.131290  323285 cri.go:89] found id: ""
	I1029 09:11:11.131346  323285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:11:11.140423  323285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:11:11.148741  323285 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1029 09:11:11.148798  323285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:11:11.156810  323285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:11:11.156826  323285 kubeadm.go:158] found existing configuration files:
	
	I1029 09:11:11.156874  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:11:11.164570  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:11:11.164623  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:11:11.172197  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:11:11.180475  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:11:11.180538  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:11:11.188729  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:11:11.197081  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:11:11.197134  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:11:11.205164  323285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:11:11.213757  323285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:11:11.213834  323285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 09:11:11.222560  323285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1029 09:11:11.268456  323285 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 09:11:11.268507  323285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 09:11:11.290199  323285 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1029 09:11:11.290297  323285 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1042-gcp
	I1029 09:11:11.290361  323285 kubeadm.go:319] OS: Linux
	I1029 09:11:11.290441  323285 kubeadm.go:319] CGROUPS_CPU: enabled
	I1029 09:11:11.290490  323285 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1029 09:11:11.290536  323285 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1029 09:11:11.290625  323285 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1029 09:11:11.290702  323285 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1029 09:11:11.290774  323285 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1029 09:11:11.290840  323285 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1029 09:11:11.290910  323285 kubeadm.go:319] CGROUPS_IO: enabled
	I1029 09:11:11.353151  323285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 09:11:11.353280  323285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 09:11:11.353455  323285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 09:11:11.361607  323285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1029 09:11:11.641711  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	W1029 09:11:14.140814  317625 pod_ready.go:104] pod "coredns-66bc5c9577-qtsxl" is not "Ready", error: <nil>
	I1029 09:11:11.363970  323285 out.go:252]   - Generating certificates and keys ...
	I1029 09:11:11.364100  323285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 09:11:11.364205  323285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 09:11:11.568728  323285 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 09:11:11.698854  323285 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 09:11:12.039747  323285 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 09:11:12.129625  323285 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 09:11:12.340599  323285 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 09:11:12.340797  323285 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-259430] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:11:12.447881  323285 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 09:11:12.448051  323285 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-259430] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1029 09:11:12.809139  323285 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 09:11:13.118618  323285 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 09:11:13.421858  323285 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 09:11:13.421937  323285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 09:11:13.838287  323285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 09:11:13.908409  323285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 09:11:13.966840  323285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 09:11:14.294658  323285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 09:11:14.520651  323285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 09:11:14.521473  323285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 09:11:14.525440  323285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 09:11:16.641154  317625 pod_ready.go:94] pod "coredns-66bc5c9577-qtsxl" is "Ready"
	I1029 09:11:16.641182  317625 pod_ready.go:86] duration metric: took 32.006267628s for pod "coredns-66bc5c9577-qtsxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.644109  317625 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.649400  317625 pod_ready.go:94] pod "etcd-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.649427  317625 pod_ready.go:86] duration metric: took 5.291908ms for pod "etcd-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.651669  317625 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.657129  317625 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.657156  317625 pod_ready.go:86] duration metric: took 5.462345ms for pod "kube-apiserver-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.659534  317625 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:16.839252  317625 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:16.839288  317625 pod_ready.go:86] duration metric: took 179.72875ms for pod "kube-controller-manager-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.038505  317625 pod_ready.go:83] waiting for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.439109  317625 pod_ready.go:94] pod "kube-proxy-82xcl" is "Ready"
	I1029 09:11:17.439143  317625 pod_ready.go:86] duration metric: took 400.60463ms for pod "kube-proxy-82xcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:17.638686  317625 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:18.038057  317625 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-017274" is "Ready"
	I1029 09:11:18.038087  317625 pod_ready.go:86] duration metric: took 399.368296ms for pod "kube-scheduler-default-k8s-diff-port-017274" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:11:18.038104  317625 pod_ready.go:40] duration metric: took 33.407465789s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:11:18.083317  317625 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:18.085224  317625 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-017274" cluster and "default" namespace by default
	I1029 09:11:14.527215  323285 out.go:252]   - Booting up control plane ...
	I1029 09:11:14.527330  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 09:11:14.528001  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 09:11:14.529019  323285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 09:11:14.543280  323285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 09:11:14.543401  323285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 09:11:14.550630  323285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 09:11:14.550841  323285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 09:11:14.550884  323285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 09:11:14.650739  323285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 09:11:14.650905  323285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 09:11:15.652524  323285 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001987708s
	I1029 09:11:15.655579  323285 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 09:11:15.655710  323285 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1029 09:11:15.655837  323285 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 09:11:15.655956  323285 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 09:11:16.826389  323285 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.170616379s
	I1029 09:11:17.564867  323285 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.909192382s
	I1029 09:11:19.659250  323285 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.003481114s
	I1029 09:11:19.671798  323285 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 09:11:19.684260  323285 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 09:11:19.699471  323285 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 09:11:19.699763  323285 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-259430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 09:11:19.710393  323285 kubeadm.go:319] [bootstrap-token] Using token: etunao.909gsmlonyfps6an
	I1029 09:11:19.712233  323285 out.go:252]   - Configuring RBAC rules ...
	I1029 09:11:19.712362  323285 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 09:11:19.717094  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 09:11:19.726179  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 09:11:19.730162  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 09:11:19.733946  323285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 09:11:19.737821  323285 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 09:11:20.066141  323285 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 09:11:20.488826  323285 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 09:11:21.065711  323285 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 09:11:21.066649  323285 kubeadm.go:319] 
	I1029 09:11:21.066715  323285 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 09:11:21.066724  323285 kubeadm.go:319] 
	I1029 09:11:21.066789  323285 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 09:11:21.066796  323285 kubeadm.go:319] 
	I1029 09:11:21.066849  323285 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 09:11:21.066954  323285 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 09:11:21.067064  323285 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 09:11:21.067084  323285 kubeadm.go:319] 
	I1029 09:11:21.067165  323285 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 09:11:21.067184  323285 kubeadm.go:319] 
	I1029 09:11:21.067246  323285 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 09:11:21.067257  323285 kubeadm.go:319] 
	I1029 09:11:21.067324  323285 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 09:11:21.067491  323285 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 09:11:21.067595  323285 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 09:11:21.067606  323285 kubeadm.go:319] 
	I1029 09:11:21.067731  323285 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 09:11:21.067854  323285 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 09:11:21.067869  323285 kubeadm.go:319] 
	I1029 09:11:21.068015  323285 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token etunao.909gsmlonyfps6an \
	I1029 09:11:21.068175  323285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 \
	I1029 09:11:21.068206  323285 kubeadm.go:319] 	--control-plane 
	I1029 09:11:21.068228  323285 kubeadm.go:319] 
	I1029 09:11:21.068341  323285 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 09:11:21.068355  323285 kubeadm.go:319] 
	I1029 09:11:21.068471  323285 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token etunao.909gsmlonyfps6an \
	I1029 09:11:21.068560  323285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ac3e7481983f892dc0d8a54808eeb48169ef741e11f757d145550a40a55b8d23 
	I1029 09:11:21.072046  323285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1029 09:11:21.072153  323285 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 09:11:21.072178  323285 cni.go:84] Creating CNI manager for ""
	I1029 09:11:21.072201  323285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:21.075063  323285 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1029 09:11:21.076333  323285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1029 09:11:21.080941  323285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1029 09:11:21.080968  323285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1029 09:11:21.097427  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1029 09:11:21.329871  323285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:11:21.329963  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:21.330018  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-259430 minikube.k8s.io/updated_at=2025_10_29T09_11_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=newest-cni-259430 minikube.k8s.io/primary=true
	I1029 09:11:21.340318  323285 ops.go:34] apiserver oom_adj: -16
	I1029 09:11:21.427541  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:21.927714  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:22.428593  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:22.928549  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:23.427854  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:23.927662  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:24.427970  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:24.927865  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:25.427901  323285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 09:11:25.501188  323285 kubeadm.go:1114] duration metric: took 4.171293414s to wait for elevateKubeSystemPrivileges
	I1029 09:11:25.501228  323285 kubeadm.go:403] duration metric: took 14.397367402s to StartCluster
	I1029 09:11:25.501250  323285 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:25.501330  323285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:25.502295  323285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:25.502553  323285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:11:25.502565  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 09:11:25.502588  323285 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:11:25.502688  323285 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-259430"
	I1029 09:11:25.502719  323285 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-259430"
	I1029 09:11:25.502730  323285 addons.go:70] Setting default-storageclass=true in profile "newest-cni-259430"
	I1029 09:11:25.502755  323285 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:25.502770  323285 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:25.502782  323285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-259430"
	I1029 09:11:25.503251  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.503349  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.504494  323285 out.go:179] * Verifying Kubernetes components...
	I1029 09:11:25.505968  323285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:25.527485  323285 addons.go:239] Setting addon default-storageclass=true in "newest-cni-259430"
	I1029 09:11:25.527539  323285 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:25.527972  323285 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:25.530438  323285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:11:25.531693  323285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:25.531717  323285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:11:25.531789  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:25.560504  323285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:25.560528  323285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:11:25.560591  323285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:25.562044  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:25.586080  323285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:25.599802  323285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 09:11:25.657575  323285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:25.699467  323285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:25.704238  323285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:25.813033  323285 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1029 09:11:25.813984  323285 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:11:25.814061  323285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:11:26.026073  323285 api_server.go:72] duration metric: took 523.482951ms to wait for apiserver process to appear ...
	I1029 09:11:26.026104  323285 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:11:26.026125  323285 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:26.031704  323285 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:11:26.032574  323285 api_server.go:141] control plane version: v1.34.1
	I1029 09:11:26.032596  323285 api_server.go:131] duration metric: took 6.485257ms to wait for apiserver health ...
	I1029 09:11:26.032604  323285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:11:26.034592  323285 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 09:11:26.035774  323285 addons.go:515] duration metric: took 533.185042ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1029 09:11:26.036425  323285 system_pods.go:59] 8 kube-system pods found
	I1029 09:11:26.036459  323285 system_pods.go:61] "coredns-66bc5c9577-k74f5" [d32eecf7-613f-43fe-87b6-1c56dc6f7837] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:26.036469  323285 system_pods.go:61] "etcd-newest-cni-259430" [21bef91b-1e23-4c0b-836a-7d38dbcd158d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:11:26.036477  323285 system_pods.go:61] "kindnet-4555c" [e9503ed8-3583-471b-8ed2-cb19fa55932f] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:11:26.036483  323285 system_pods.go:61] "kube-apiserver-newest-cni-259430" [e2aa2d83-bd57-4b42-9f74-cc369442fb48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:11:26.036489  323285 system_pods.go:61] "kube-controller-manager-newest-cni-259430" [c8b1f927-8450-4b3d-8380-0d74388f7b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:11:26.036493  323285 system_pods.go:61] "kube-proxy-md8mn" [5b216c8f-e72c-44bd-ac4a-4f07213f90bb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:11:26.036499  323285 system_pods.go:61] "kube-scheduler-newest-cni-259430" [6dffb3f4-a5a2-456f-bfe4-34c2a0916645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:11:26.036510  323285 system_pods.go:61] "storage-provisioner" [b614d976-a2b2-4dff-9276-58ac33de3f70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:26.036517  323285 system_pods.go:74] duration metric: took 3.906841ms to wait for pod list to return data ...
	I1029 09:11:26.036528  323285 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:11:26.038941  323285 default_sa.go:45] found service account: "default"
	I1029 09:11:26.038970  323285 default_sa.go:55] duration metric: took 2.434992ms for default service account to be created ...
	I1029 09:11:26.038985  323285 kubeadm.go:587] duration metric: took 536.401056ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:11:26.039017  323285 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:11:26.041827  323285 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:11:26.041856  323285 node_conditions.go:123] node cpu capacity is 8
	I1029 09:11:26.041871  323285 node_conditions.go:105] duration metric: took 2.848114ms to run NodePressure ...
	I1029 09:11:26.041886  323285 start.go:242] waiting for startup goroutines ...
	I1029 09:11:26.317778  323285 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-259430" context rescaled to 1 replicas
	I1029 09:11:26.317823  323285 start.go:247] waiting for cluster config update ...
	I1029 09:11:26.317834  323285 start.go:256] writing updated cluster config ...
	I1029 09:11:26.318152  323285 ssh_runner.go:195] Run: rm -f paused
	I1029 09:11:26.372618  323285 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:26.375584  323285 out.go:179] * Done! kubectl is now configured to use "newest-cni-259430" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 09:11:04 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:04.003300044Z" level=info msg="Started container" PID=1753 containerID=049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper id=9db337fe-0d37-41ab-88f3-d72f3d806820 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1ac86f52b56ec20cc039775ffbcdee17c0f811c4eb3063f25155c595e4012c2
	Oct 29 09:11:04 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:04.940701844Z" level=info msg="Removing container: c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663" id=a85184eb-4171-43a9-bc2e-ec2ae498c9c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:11:05 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:05.240837264Z" level=info msg="Removed container c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=a85184eb-4171-43a9-bc2e-ec2ae498c9c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.94516126Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=51b55299-9a9d-4217-9eab-2f837f2912e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.946161451Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2f3fe5fa-0d0a-4f8c-842e-3d2982c3e0d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.950100941Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=11bb5538-8678-42c2-b6de-4c613719e06e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.950258702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.955150563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.955368423Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/26b1dc5bf57b6834ad9deebb6ad353190ac3b4fbabeefacdd218f37dd8fcf10a/merged/etc/passwd: no such file or directory"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.955417027Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/26b1dc5bf57b6834ad9deebb6ad353190ac3b4fbabeefacdd218f37dd8fcf10a/merged/etc/group: no such file or directory"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.956249429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.982151997Z" level=info msg="Created container 9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546: kube-system/storage-provisioner/storage-provisioner" id=11bb5538-8678-42c2-b6de-4c613719e06e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.982826169Z" level=info msg="Starting container: 9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546" id=6bb552ca-feef-4b85-a5d0-b9479171ba47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:14 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:14.984656556Z" level=info msg="Started container" PID=1767 containerID=9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546 description=kube-system/storage-provisioner/storage-provisioner id=6bb552ca-feef-4b85-a5d0-b9479171ba47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a05601b284ddfb76e8e985c3ddaa9cefd747d00587eb271e46cec2ef1dc2c3cc
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.824390987Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5ed37546-0650-4f97-8f40-5480b374d777 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.825422197Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cf74c26b-8bbc-4ae9-99dc-3109272d1aed name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.82652955Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=28d0f4f8-e8a3-40bc-90e7-d13ec63d1cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.82667634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.832569919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.833094519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.862845477Z" level=info msg="Created container f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=28d0f4f8-e8a3-40bc-90e7-d13ec63d1cf7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.86357276Z" level=info msg="Starting container: f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3" id=4e74fc90-a0d1-49b9-b051-8dfc301f74c3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.865447279Z" level=info msg="Started container" PID=1803 containerID=f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper id=4e74fc90-a0d1-49b9-b051-8dfc301f74c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1ac86f52b56ec20cc039775ffbcdee17c0f811c4eb3063f25155c595e4012c2
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.983473062Z" level=info msg="Removing container: 049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3" id=78f9277f-3c4f-43c5-b476-3f8626ac0383 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 29 09:11:26 default-k8s-diff-port-017274 crio[565]: time="2025-10-29T09:11:26.995468867Z" level=info msg="Removed container 049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6/dashboard-metrics-scraper" id=78f9277f-3c4f-43c5-b476-3f8626ac0383 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f18669e29ce3b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   b1ac86f52b56e       dashboard-metrics-scraper-6ffb444bf9-qgpb6             kubernetes-dashboard
	9f8b022f6197b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   a05601b284ddf       storage-provisioner                                    kube-system
	6a233da0986cd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   b0550839b9aae       kubernetes-dashboard-855c9754f9-4kfgv                  kubernetes-dashboard
	ba3638e3a9f2a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   ce5eb26bee0cf       busybox                                                default
	b307f64c120f3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   34f269a75a0e4       coredns-66bc5c9577-qtsxl                               kube-system
	622a00b140b2c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   58d06db2a0823       kube-proxy-82xcl                                       kube-system
	6f632ec2ab17f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   a05601b284ddf       storage-provisioner                                    kube-system
	4ce19f536b3e7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   6657d88a668c3       kindnet-tdtxm                                          kube-system
	7e6fae9cd623c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   b7121fee26dff       kube-scheduler-default-k8s-diff-port-017274            kube-system
	f86c6058a7094       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   59c6e6e7a6b49       kube-apiserver-default-k8s-diff-port-017274            kube-system
	16de8e1e0e29b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   d80c61951efb3       kube-controller-manager-default-k8s-diff-port-017274   kube-system
	bf3d3afb886dc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   dff9b51d4187a       etcd-default-k8s-diff-port-017274                      kube-system
	
	
	==> coredns [b307f64c120f3819158edb444be7e97b99be83763b6b415d2244b39fe00046f9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53623 - 50449 "HINFO IN 4280449570087446041.7047790352421022031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082582763s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-017274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-017274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=default-k8s-diff-port-017274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_09_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:09:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-017274
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:11:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:09:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:11:14 +0000   Wed, 29 Oct 2025 09:10:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-017274
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c5ea9dce-72e7-4834-9b46-0ce5130939cc
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-qtsxl                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-default-k8s-diff-port-017274                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-tdtxm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-017274             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-017274    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-82xcl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-017274             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qgpb6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4kfgv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node default-k8s-diff-port-017274 event: Registered Node default-k8s-diff-port-017274 in Controller
	  Normal  NodeReady                91s                  kubelet          Node default-k8s-diff-port-017274 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-017274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node default-k8s-diff-port-017274 event: Registered Node default-k8s-diff-port-017274 in Controller
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [bf3d3afb886dcc98b83711dba516b774e5c1d649904cdd75ab5a786f4f65ac21] <==
	{"level":"warn","ts":"2025-10-29T09:10:42.365615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.373112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.379898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.386720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.394168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.402568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.411150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.419941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.430877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.438041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.445152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.452918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.460252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.467084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.474124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.481897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.490112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.513354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.527631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:10:42.574641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58422","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:11:03.615459Z","caller":"traceutil/trace.go:172","msg":"trace[365937369] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"110.997339ms","start":"2025-10-29T09:11:03.504432Z","end":"2025-10-29T09:11:03.615429Z","steps":["trace[365937369] 'process raft request'  (duration: 110.853397ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:11:03.862585Z","caller":"traceutil/trace.go:172","msg":"trace[245654951] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"115.223476ms","start":"2025-10-29T09:11:03.747341Z","end":"2025-10-29T09:11:03.862565Z","steps":["trace[245654951] 'process raft request'  (duration: 115.038389ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:11:04.038446Z","caller":"traceutil/trace.go:172","msg":"trace[1934644612] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"110.053765ms","start":"2025-10-29T09:11:03.928370Z","end":"2025-10-29T09:11:04.038423Z","steps":["trace[1934644612] 'process raft request'  (duration: 109.772472ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T09:11:04.203222Z","caller":"traceutil/trace.go:172","msg":"trace[1534120381] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"106.659062ms","start":"2025-10-29T09:11:04.096536Z","end":"2025-10-29T09:11:04.203195Z","steps":["trace[1534120381] 'process raft request'  (duration: 93.620768ms)","trace[1534120381] 'compare'  (duration: 12.900719ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-29T09:11:04.535749Z","caller":"traceutil/trace.go:172","msg":"trace[1075526035] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"112.97477ms","start":"2025-10-29T09:11:04.422654Z","end":"2025-10-29T09:11:04.535629Z","steps":["trace[1075526035] 'process raft request'  (duration: 112.766353ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:35 up 54 min,  0 user,  load average: 4.25, 4.13, 2.69
	Linux default-k8s-diff-port-017274 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4ce19f536b3e79a00539eda45389baa388b6e72af2f5f3735054624a5e24cc23] <==
	I1029 09:10:44.450781       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:10:44.451043       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1029 09:10:44.548076       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:10:44.548105       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:10:44.548127       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:10:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:10:44.708798       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:10:44.708977       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:10:44.709010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:10:44.709254       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:10:45.048128       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:10:45.048156       1 metrics.go:72] Registering metrics
	I1029 09:10:45.048234       1 controller.go:711] "Syncing nftables rules"
	I1029 09:10:54.709097       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:10:54.709155       1 main.go:301] handling current node
	I1029 09:11:04.709384       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:11:04.709434       1 main.go:301] handling current node
	I1029 09:11:14.709490       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:11:14.709550       1 main.go:301] handling current node
	I1029 09:11:24.709960       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:11:24.710036       1 main.go:301] handling current node
	I1029 09:11:34.715090       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1029 09:11:34.715135       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f86c6058a709440c09ee461898fae3daf70e692c424c5d7e8f093887f7ac3e6e] <==
	I1029 09:10:43.099675       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:10:43.099799       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1029 09:10:43.099846       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1029 09:10:43.100357       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:10:43.100372       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:10:43.100379       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:10:43.100386       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:10:43.105032       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1029 09:10:43.105943       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:10:43.121927       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:10:43.127671       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:10:43.133284       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:10:43.133327       1 policy_source.go:240] refreshing policies
	I1029 09:10:43.220896       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:10:43.386884       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:10:43.419081       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:10:43.445097       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:10:43.455720       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:10:43.464455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:10:43.506614       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.32.103"}
	I1029 09:10:43.519128       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.107.50"}
	I1029 09:10:44.001501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:10:46.097827       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:10:46.149468       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:10:46.348708       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [16de8e1e0e29b4272b860675eb3990e121068d5daeaec00a854feb51ab6b59c6] <==
	I1029 09:10:45.709582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:10:45.710586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:10:45.710632       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:10:45.713831       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:10:45.716149       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:10:45.744717       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:10:45.744746       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:10:45.744717       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:10:45.744744       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1029 09:10:45.744846       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1029 09:10:45.745049       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:10:45.745452       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:10:45.746724       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:10:45.746813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:10:45.746945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-017274"
	I1029 09:10:45.747013       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 09:10:45.749017       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:10:45.749477       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:10:45.750806       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:10:45.753034       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:10:45.755977       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:10:45.758423       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:10:45.760374       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:10:45.762947       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:10:45.770345       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [622a00b140b2c0609cf8cf6561c828e18ba9000776cbc4c975473747329412e8] <==
	I1029 09:10:44.261605       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:10:44.332167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:10:44.433216       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:10:44.433262       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1029 09:10:44.433380       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:10:44.453690       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:10:44.453748       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:10:44.459007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:10:44.459398       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:10:44.459428       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:44.460747       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:10:44.460780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:10:44.460776       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:10:44.460800       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:10:44.460831       1 config.go:200] "Starting service config controller"
	I1029 09:10:44.460838       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:10:44.460847       1 config.go:309] "Starting node config controller"
	I1029 09:10:44.460866       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:10:44.460875       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:10:44.561149       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:10:44.561235       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:10:44.561262       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7e6fae9cd623cd88656a304b8514161f51b751e23f1918df0f51d122620ec416] <==
	I1029 09:10:41.760848       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:10:43.017360       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:10:43.017396       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:10:43.017408       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:10:43.017417       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:10:43.074051       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:10:43.074097       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:10:43.078059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:10:43.078545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:10:43.078897       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:43.080093       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:10:43.181047       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:10:46 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:46.372217     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqd8\" (UniqueName: \"kubernetes.io/projected/0df4a7f6-44d6-434b-b4a6-15ecc6298dc6-kube-api-access-fsqd8\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpb6\" (UID: \"0df4a7f6-44d6-434b-b4a6-15ecc6298dc6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6"
	Oct 29 09:10:46 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:46.372243     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0df4a7f6-44d6-434b-b4a6-15ecc6298dc6-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpb6\" (UID: \"0df4a7f6-44d6-434b-b4a6-15ecc6298dc6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6"
	Oct 29 09:10:46 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:46.372322     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1a5c8fb6-1c63-42d2-8b52-de30e9a56c2c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4kfgv\" (UID: \"1a5c8fb6-1c63-42d2-8b52-de30e9a56c2c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4kfgv"
	Oct 29 09:10:52 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:52.877763     721 scope.go:117] "RemoveContainer" containerID="540f863b4d80ce46e133380f1883aeb49643664d9b2fe555fc7a9f2911a9db40"
	Oct 29 09:10:52 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:52.892123     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4kfgv" podStartSLOduration=3.661496395 podStartE2EDuration="6.892099995s" podCreationTimestamp="2025-10-29 09:10:46 +0000 UTC" firstStartedPulling="2025-10-29 09:10:46.602408428 +0000 UTC m=+5.875219143" lastFinishedPulling="2025-10-29 09:10:49.833012028 +0000 UTC m=+9.105822743" observedRunningTime="2025-10-29 09:10:50.885411077 +0000 UTC m=+10.158221799" watchObservedRunningTime="2025-10-29 09:10:52.892099995 +0000 UTC m=+12.164910721"
	Oct 29 09:10:53 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:53.882473     721 scope.go:117] "RemoveContainer" containerID="540f863b4d80ce46e133380f1883aeb49643664d9b2fe555fc7a9f2911a9db40"
	Oct 29 09:10:53 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:53.882635     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:10:53 default-k8s-diff-port-017274 kubelet[721]: E1029 09:10:53.882854     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:10:54 default-k8s-diff-port-017274 kubelet[721]: I1029 09:10:54.887387     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:10:54 default-k8s-diff-port-017274 kubelet[721]: E1029 09:10:54.887607     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:03 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:03.497121     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:11:04 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:04.914869     721 scope.go:117] "RemoveContainer" containerID="c6f493b8847c1c14252e2b9ba73d6a88203f11bcd94f6a0b288ce745ad8b4663"
	Oct 29 09:11:04 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:04.915096     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:04 default-k8s-diff-port-017274 kubelet[721]: E1029 09:11:04.915333     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:13 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:13.496921     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:13 default-k8s-diff-port-017274 kubelet[721]: E1029 09:11:13.497116     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:14 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:14.944727     721 scope.go:117] "RemoveContainer" containerID="6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:26.823798     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:26.981969     721 scope.go:117] "RemoveContainer" containerID="049423ca1d20bd76dbb36190f350135bc9eb94e58ed272ac48e6a5917c1c95a3"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: I1029 09:11:26.982273     721 scope.go:117] "RemoveContainer" containerID="f18669e29ce3bae89e47c4d615e489b67abd011cf8fb575159bfbd7cf320ddf3"
	Oct 29 09:11:26 default-k8s-diff-port-017274 kubelet[721]: E1029 09:11:26.982510     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpb6_kubernetes-dashboard(0df4a7f6-44d6-434b-b4a6-15ecc6298dc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpb6" podUID="0df4a7f6-44d6-434b-b4a6-15ecc6298dc6"
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 29 09:11:30 default-k8s-diff-port-017274 systemd[1]: kubelet.service: Consumed 1.837s CPU time.
	
	
	==> kubernetes-dashboard [6a233da0986cdd4b355e6ad5ef2ef59ef1cc2366e325cd151f90f0e07579e1d5] <==
	2025/10/29 09:10:49 Starting overwatch
	2025/10/29 09:10:49 Using namespace: kubernetes-dashboard
	2025/10/29 09:10:49 Using in-cluster config to connect to apiserver
	2025/10/29 09:10:49 Using secret token for csrf signing
	2025/10/29 09:10:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/29 09:10:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/29 09:10:49 Successful initial request to the apiserver, version: v1.34.1
	2025/10/29 09:10:49 Generating JWE encryption key
	2025/10/29 09:10:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/29 09:10:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/29 09:10:50 Initializing JWE encryption key from synchronized object
	2025/10/29 09:10:50 Creating in-cluster Sidecar client
	2025/10/29 09:10:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/29 09:10:50 Serving insecurely on HTTP port: 9090
	2025/10/29 09:11:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6f632ec2ab17f16383342be9b493a5028653a719ddcd23d6ebf0bf9ef6d6ada4] <==
	I1029 09:10:44.220947       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1029 09:11:14.223711       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9f8b022f6197b575996b89e676302e4eb86553b72ca4f45a653082725e761546] <==
	I1029 09:11:14.997381       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1029 09:11:15.004746       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1029 09:11:15.004798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1029 09:11:15.007362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:18.462725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:22.723858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:26.322540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:29.376598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:32.399349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:32.405470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:11:32.405680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1029 09:11:32.406228       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc8dd318-9670-4d4d-99bd-9ed78324108f", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-017274_71b3aaf8-a893-4acc-810a-439b6056f8f3 became leader
	I1029 09:11:32.406356       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-017274_71b3aaf8-a893-4acc-810a-439b6056f8f3!
	W1029 09:11:32.412297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:32.417582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1029 09:11:32.506945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-017274_71b3aaf8-a893-4acc-810a-439b6056f8f3!
	W1029 09:11:34.420629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 09:11:34.426578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274: exit status 2 (357.808701ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-017274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.02s)
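The post-mortem above is self-consistent: `minikube status` prints `Running` for the apiserver yet exits 2, while the captured CoreDNS and storage-provisioner logs both show `dial tcp 10.96.0.1:443: i/o timeout`, i.e. the in-cluster apiserver Service was unreachable for a window after the restart. A minimal sketch for checking that Service by hand, hedged: the context/profile name is copied from the logs above, and the `--format` template fields are assumed from minikube's Status struct as seen in this report.

  # Does the kubernetes Service still have an apiserver endpoint?
  $ kubectl --context default-k8s-diff-port-017274 get endpoints kubernetes -n default

  # Cross-check the component states minikube itself reports for the profile
  $ out/minikube-linux-amd64 status -p default-k8s-diff-port-017274 --format='{{.APIServer}} {{.Kubelet}}'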

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-259430 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-259430 --alsologtostderr -v=1: exit status 80 (2.360608459s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-259430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:11:47.489719  334923 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:11:47.490036  334923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:47.490047  334923 out.go:374] Setting ErrFile to fd 2...
	I1029 09:11:47.490053  334923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:47.490291  334923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:11:47.490573  334923 out.go:368] Setting JSON to false
	I1029 09:11:47.490630  334923 mustload.go:66] Loading cluster: newest-cni-259430
	I1029 09:11:47.491064  334923 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:47.491498  334923 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:47.513595  334923 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:47.513979  334923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:11:47.577929  334923 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-29 09:11:47.567932053 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:11:47.578593  334923 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-259430 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1029 09:11:47.580914  334923 out.go:179] * Pausing node newest-cni-259430 ... 
	I1029 09:11:47.582022  334923 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:47.582316  334923 ssh_runner.go:195] Run: systemctl --version
	I1029 09:11:47.582356  334923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:47.602214  334923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:47.703121  334923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:47.717768  334923 pause.go:52] kubelet running: true
	I1029 09:11:47.717856  334923 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:47.853928  334923 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:47.854072  334923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:47.921684  334923 cri.go:89] found id: "9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe"
	I1029 09:11:47.921706  334923 cri.go:89] found id: "ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea"
	I1029 09:11:47.921710  334923 cri.go:89] found id: "f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c"
	I1029 09:11:47.921714  334923 cri.go:89] found id: "4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df"
	I1029 09:11:47.921716  334923 cri.go:89] found id: "74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1"
	I1029 09:11:47.921719  334923 cri.go:89] found id: "d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb"
	I1029 09:11:47.921722  334923 cri.go:89] found id: ""
	I1029 09:11:47.921770  334923 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:47.933754  334923 retry.go:31] will retry after 296.944751ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:47Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:11:48.231348  334923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:48.244730  334923 pause.go:52] kubelet running: false
	I1029 09:11:48.244796  334923 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:48.353737  334923 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:48.353808  334923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:48.424451  334923 cri.go:89] found id: "9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe"
	I1029 09:11:48.424477  334923 cri.go:89] found id: "ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea"
	I1029 09:11:48.424481  334923 cri.go:89] found id: "f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c"
	I1029 09:11:48.424484  334923 cri.go:89] found id: "4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df"
	I1029 09:11:48.424487  334923 cri.go:89] found id: "74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1"
	I1029 09:11:48.424490  334923 cri.go:89] found id: "d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb"
	I1029 09:11:48.424492  334923 cri.go:89] found id: ""
	I1029 09:11:48.424557  334923 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:48.436318  334923 retry.go:31] will retry after 251.325186ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:48Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:11:48.687789  334923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:48.701392  334923 pause.go:52] kubelet running: false
	I1029 09:11:48.701476  334923 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:48.834573  334923 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:48.834653  334923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:48.903226  334923 cri.go:89] found id: "9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe"
	I1029 09:11:48.903248  334923 cri.go:89] found id: "ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea"
	I1029 09:11:48.903252  334923 cri.go:89] found id: "f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c"
	I1029 09:11:48.903256  334923 cri.go:89] found id: "4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df"
	I1029 09:11:48.903258  334923 cri.go:89] found id: "74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1"
	I1029 09:11:48.903268  334923 cri.go:89] found id: "d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb"
	I1029 09:11:48.903271  334923 cri.go:89] found id: ""
	I1029 09:11:48.903322  334923 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:48.915770  334923 retry.go:31] will retry after 644.502043ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:48Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:11:49.560656  334923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:11:49.574480  334923 pause.go:52] kubelet running: false
	I1029 09:11:49.574548  334923 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1029 09:11:49.688405  334923 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1029 09:11:49.688510  334923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1029 09:11:49.756742  334923 cri.go:89] found id: "9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe"
	I1029 09:11:49.756763  334923 cri.go:89] found id: "ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea"
	I1029 09:11:49.756767  334923 cri.go:89] found id: "f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c"
	I1029 09:11:49.756771  334923 cri.go:89] found id: "4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df"
	I1029 09:11:49.756781  334923 cri.go:89] found id: "74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1"
	I1029 09:11:49.756784  334923 cri.go:89] found id: "d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb"
	I1029 09:11:49.756787  334923 cri.go:89] found id: ""
	I1029 09:11:49.756838  334923 ssh_runner.go:195] Run: sudo runc list -f json
	I1029 09:11:49.770945  334923 out.go:203] 
	W1029 09:11:49.772253  334923 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1029 09:11:49.772270  334923 out.go:285] * 
	* 
	W1029 09:11:49.776512  334923 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1029 09:11:49.777620  334923 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-259430 --alsologtostderr -v=1 failed: exit status 80
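The pause never reaches a container: each attempt at `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, meaning runc's state directory is missing inside the kic container (the docker inspect below shows /run mounted as a tmpfs, so the directory is empty after a restart until the runtime recreates it). Below is a minimal sketch of the failing check, assuming a host with runc on the PATH; the single ~650ms retry mirrors the retry.go line above, and listRunc is a hypothetical helper, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc mirrors the logged command: sudo runc list -f json.
func listRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	var out []byte
	var err error
	for attempt := 0; attempt < 2; attempt++ {
		if out, err = listRunc(); err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		if attempt == 0 {
			// One retry after ~650ms, as in the log above.
			time.Sleep(650 * time.Millisecond)
		}
	}
	// On this run the combined output carries the real cause:
	// "open /run/runc: no such file or directory".
	fmt.Printf("list running: runc: %v\n%s\n", err, out)
}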
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
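The snapshot is recorded because stray HTTP_PROXY/HTTPS_PROXY settings on the CI host are a common cause of service and tunnel failures; here all three variables are unset. A tiny illustrative sketch of taking such a snapshot (not the harness's actual helper):

package main

import (
	"fmt"
	"os"
)

func main() {
	// "<empty>" matches the report's notation for an unset variable.
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		v := os.Getenv(k)
		if v == "" {
			v = "<empty>"
		}
		fmt.Printf("%s=%q ", k, v)
	}
	fmt.Println()
}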
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-259430
helpers_test.go:243: (dbg) docker inspect newest-cni-259430:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb",
	        "Created": "2025-10-29T09:11:05.338331033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:11:36.999957877Z",
	            "FinishedAt": "2025-10-29T09:11:36.114284506Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/hostname",
	        "HostsPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/hosts",
	        "LogPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb-json.log",
	        "Name": "/newest-cni-259430",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-259430:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-259430",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb",
	                "LowerDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-259430",
	                "Source": "/var/lib/docker/volumes/newest-cni-259430/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-259430",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-259430",
	                "name.minikube.sigs.k8s.io": "newest-cni-259430",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "621d55fd358f7a63a391e5675bf5022ea6541694211944060b97c67a8f5e9041",
	            "SandboxKey": "/var/run/docker/netns/621d55fd358f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-259430": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ed:04:22:05:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "52c784c79ded45742c986e67d511ad367789016db5dda3c8e7a6f446705f967c",
	                    "EndpointID": "23f77a7b1d7cfb5043ba09aa2e433ab95e1c40d42f1e03d727a3cce0fde70d07",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-259430",
	                        "898af032bdf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
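The inspect output above is what provisioning keys off: the host port published for 22/tcp (33133 here) is resolved with the same Go template that appears later in the Last Start log. A sketch of that lookup, assuming the docker CLI is on the PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort resolves the published SSH port using the template the
// minikube log itself applies against NetworkSettings.Ports.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("newest-cni-259430")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + port) // 33133 in this run
}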
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430: exit status 2 (329.601315ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
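A non-zero exit here is tolerated because `minikube status` reports cluster state partly through its exit code; the host container is still Running even though the failed pause left the profile between states, so the post-mortem logs the error and continues. A sketch of the same tolerant check, using the binary path from this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "newest-cni-259430", "-n", "newest-cni-259430")
	out, err := cmd.Output() // stdout ("Running") is captured even on failure
	fmt.Printf("host state: %s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 2 {
		fmt.Println("status error: exit status 2 (may be ok)")
	} else if err != nil {
		fmt.Println("status failed:", err)
	}
}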
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-259430 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-259430 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ default-k8s-diff-port-017274 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ pause   │ -p default-k8s-diff-port-017274 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-017274                                                                                                                                                                                                               │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-259430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p default-k8s-diff-port-017274                                                                                                                                                                                                               │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ newest-cni-259430 image list --format=json                                                                                                                                                                                                    │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ pause   │ -p newest-cni-259430 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:11:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:11:36.756662  332670 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:11:36.756934  332670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:36.756942  332670 out.go:374] Setting ErrFile to fd 2...
	I1029 09:11:36.756947  332670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:36.757183  332670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:11:36.757626  332670 out.go:368] Setting JSON to false
	I1029 09:11:36.758720  332670 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3245,"bootTime":1761725852,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:11:36.758808  332670 start.go:143] virtualization: kvm guest
	I1029 09:11:36.760726  332670 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:11:36.762031  332670 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:11:36.762042  332670 notify.go:221] Checking for updates...
	I1029 09:11:36.764458  332670 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:11:36.765702  332670 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:36.770278  332670 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:11:36.771737  332670 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:11:36.773013  332670 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:11:36.774824  332670 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:36.775540  332670 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:11:36.801801  332670 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:11:36.801948  332670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:11:36.864465  332670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-29 09:11:36.854076219 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:11:36.864577  332670 docker.go:319] overlay module found
	I1029 09:11:36.866450  332670 out.go:179] * Using the docker driver based on existing profile
	I1029 09:11:36.867623  332670 start.go:309] selected driver: docker
	I1029 09:11:36.867643  332670 start.go:930] validating driver "docker" against &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:36.867749  332670 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:11:36.868376  332670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:11:36.926679  332670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-29 09:11:36.916086544 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:11:36.927013  332670 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:11:36.927090  332670 cni.go:84] Creating CNI manager for ""
	I1029 09:11:36.927162  332670 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:36.927250  332670 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:36.929052  332670 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:11:36.930119  332670 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:11:36.931204  332670 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:11:36.932310  332670 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:36.932335  332670 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:11:36.932353  332670 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:11:36.932362  332670 cache.go:59] Caching tarball of preloaded images
	I1029 09:11:36.932459  332670 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:11:36.932483  332670 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:11:36.932615  332670 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:11:36.953333  332670 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:11:36.953357  332670 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:11:36.953377  332670 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:11:36.953403  332670 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:11:36.953471  332670 start.go:364] duration metric: took 45.255µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:11:36.953494  332670 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:11:36.953503  332670 fix.go:54] fixHost starting: 
	I1029 09:11:36.953722  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:36.971874  332670 fix.go:112] recreateIfNeeded on newest-cni-259430: state=Stopped err=<nil>
	W1029 09:11:36.971903  332670 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:11:36.973783  332670 out.go:252] * Restarting existing docker container for "newest-cni-259430" ...
	I1029 09:11:36.973850  332670 cli_runner.go:164] Run: docker start newest-cni-259430
	I1029 09:11:37.228962  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:37.249146  332670 kic.go:430] container "newest-cni-259430" state is running.
	I1029 09:11:37.249547  332670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:37.269403  332670 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:11:37.269699  332670 machine.go:94] provisionDockerMachine start ...
	I1029 09:11:37.269798  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:37.289555  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:37.289804  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:37.289817  332670 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:11:37.290428  332670 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43456->127.0.0.1:33133: read: connection reset by peer
	I1029 09:11:40.434498  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:40.434531  332670 ubuntu.go:182] provisioning hostname "newest-cni-259430"
	I1029 09:11:40.434599  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:40.453493  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:40.453831  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:40.453859  332670 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-259430 && echo "newest-cni-259430" | sudo tee /etc/hostname
	I1029 09:11:40.606076  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:40.606142  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:40.625454  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:40.625733  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:40.625754  332670 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-259430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-259430/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-259430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:11:40.767975  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:11:40.768015  332670 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:11:40.768035  332670 ubuntu.go:190] setting up certificates
	I1029 09:11:40.768046  332670 provision.go:84] configureAuth start
	I1029 09:11:40.768112  332670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:40.787510  332670 provision.go:143] copyHostCerts
	I1029 09:11:40.787579  332670 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:11:40.787588  332670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:11:40.787674  332670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:11:40.787813  332670 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:11:40.787827  332670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:11:40.787869  332670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:11:40.787968  332670 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:11:40.787978  332670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:11:40.788032  332670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:11:40.788132  332670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.newest-cni-259430 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-259430]
	I1029 09:11:41.126296  332670 provision.go:177] copyRemoteCerts
	I1029 09:11:41.126358  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:11:41.126393  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.145545  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.247965  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:11:41.266823  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:11:41.285573  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:11:41.304605  332670 provision.go:87] duration metric: took 536.544792ms to configureAuth
	I1029 09:11:41.304635  332670 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:11:41.304822  332670 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:41.304921  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.324332  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:41.324605  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:41.324632  332670 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:11:41.598885  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:11:41.598918  332670 machine.go:97] duration metric: took 4.329196359s to provisionDockerMachine
	I1029 09:11:41.598932  332670 start.go:293] postStartSetup for "newest-cni-259430" (driver="docker")
	I1029 09:11:41.598946  332670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:11:41.599033  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:11:41.599074  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.618267  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.720418  332670 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:11:41.724636  332670 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:11:41.724671  332670 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:11:41.724682  332670 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:11:41.724740  332670 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:11:41.724815  332670 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:11:41.724901  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:11:41.733149  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:41.751940  332670 start.go:296] duration metric: took 152.990446ms for postStartSetup
	I1029 09:11:41.752077  332670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:11:41.752129  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.771288  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.869451  332670 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:11:41.874389  332670 fix.go:56] duration metric: took 4.920882433s for fixHost
	I1029 09:11:41.874414  332670 start.go:83] releasing machines lock for "newest-cni-259430", held for 4.920931985s
	I1029 09:11:41.874471  332670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:41.894762  332670 ssh_runner.go:195] Run: cat /version.json
	I1029 09:11:41.894816  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.894830  332670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:11:41.894891  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.914876  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.915192  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:42.086224  332670 ssh_runner.go:195] Run: systemctl --version
	I1029 09:11:42.093150  332670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:11:42.129693  332670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:11:42.134790  332670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:11:42.134874  332670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:11:42.143276  332670 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:11:42.143303  332670 start.go:496] detecting cgroup driver to use...
	I1029 09:11:42.143334  332670 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:11:42.143372  332670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:11:42.158066  332670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:11:42.170816  332670 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:11:42.170867  332670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:11:42.185713  332670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:11:42.199149  332670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:11:42.282682  332670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:11:42.364762  332670 docker.go:234] disabling docker service ...
	I1029 09:11:42.364844  332670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:11:42.380347  332670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:11:42.393474  332670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:11:42.475743  332670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:11:42.557951  332670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:11:42.571423  332670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:11:42.586876  332670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:11:42.586948  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.596740  332670 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:11:42.596826  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.606715  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.616629  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.626731  332670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:11:42.635847  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.645578  332670 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.654790  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.664107  332670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:11:42.672046  332670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:11:42.680097  332670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:42.761870  332670 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:11:42.874833  332670 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:11:42.874888  332670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:11:42.879088  332670 start.go:564] Will wait 60s for crictl version
	I1029 09:11:42.879155  332670 ssh_runner.go:195] Run: which crictl
	I1029 09:11:42.883055  332670 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:11:42.909854  332670 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:11:42.909935  332670 ssh_runner.go:195] Run: crio --version
	I1029 09:11:42.938913  332670 ssh_runner.go:195] Run: crio --version
	I1029 09:11:42.970611  332670 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:11:42.971807  332670 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:11:42.989838  332670 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:11:42.994314  332670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
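The bash one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending the current mapping. The same filter-and-append logic as a minimal Go sketch (it prints the rewritten file rather than copying it into place; the IP and hostname are the values from this run):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const hostname = "host.minikube.internal"
		const entry = "192.168.85.1\t" + hostname

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// drop any existing mapping, mirroring `grep -v $'\thost.minikube.internal$'`
			if strings.HasSuffix(line, "\t"+hostname) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		fmt.Println(strings.Join(kept, "\n"))
	}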
	I1029 09:11:43.007148  332670 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:11:43.008325  332670 kubeadm.go:884] updating cluster {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:11:43.008472  332670 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:43.008561  332670 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:43.042322  332670 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:43.042347  332670 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:11:43.042408  332670 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:43.069835  332670 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:43.069867  332670 cache_images.go:86] Images are preloaded, skipping loading
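The preload check works by listing images over CRI and comparing tags against the expected set. A sketch of the listing half in Go (the JSON field names are an assumption about the usual shape of crictl's output, not taken from this log):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// assumed shape of `crictl images --output json`; only the fields we read
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			fmt.Println(img.ID, img.RepoTags)
		}
	}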
	I1029 09:11:43.069878  332670 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:11:43.070028  332670 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-259430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:11:43.070115  332670 ssh_runner.go:195] Run: crio config
	I1029 09:11:43.117979  332670 cni.go:84] Creating CNI manager for ""
	I1029 09:11:43.118030  332670 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:43.118063  332670 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:11:43.118096  332670 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-259430 NodeName:newest-cni-259430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:11:43.118270  332670 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-259430"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:11:43.118349  332670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:11:43.127077  332670 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:11:43.127149  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:11:43.135577  332670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:11:43.149793  332670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:11:43.163258  332670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
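The 2211-byte kubeadm.yaml.new just copied is the four-document YAML stream shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of reading one field back out of that stream with gopkg.in/yaml.v3 (an assumed dependency, not something the test itself does):

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var doc map[string]any
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			// pick the kubelet document out of the stream and report its cgroup driver
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", doc["cgroupDriver"]) // expected: systemd
			}
		}
	}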
	I1029 09:11:43.176761  332670 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:11:43.180964  332670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:11:43.192178  332670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:43.267238  332670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:43.290651  332670 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430 for IP: 192.168.85.2
	I1029 09:11:43.290675  332670 certs.go:195] generating shared ca certs ...
	I1029 09:11:43.290695  332670 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:43.290867  332670 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:11:43.290924  332670 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:11:43.290938  332670 certs.go:257] generating profile certs ...
	I1029 09:11:43.291089  332670 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key
	I1029 09:11:43.291155  332670 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3
	I1029 09:11:43.291203  332670 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key
	I1029 09:11:43.291343  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:11:43.291381  332670 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:11:43.291393  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:11:43.291421  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:11:43.291451  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:11:43.291482  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:11:43.291534  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:43.292381  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:11:43.312484  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:11:43.333163  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:11:43.353553  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:11:43.377459  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:11:43.396965  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:11:43.416155  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:11:43.434347  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:11:43.452578  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:11:43.470914  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:11:43.488980  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:11:43.506517  332670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:11:43.519420  332670 ssh_runner.go:195] Run: openssl version
	I1029 09:11:43.525794  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:11:43.535350  332670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:11:43.539606  332670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:11:43.539670  332670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:11:43.575424  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:11:43.584299  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:11:43.593704  332670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:43.597760  332670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:43.597836  332670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:43.631867  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:11:43.640751  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:11:43.649810  332670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:11:43.653879  332670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:11:43.653935  332670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:11:43.688289  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
	I1029 09:11:43.696971  332670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:11:43.701184  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:11:43.735920  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:11:43.770305  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:11:43.811599  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:11:43.855541  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:11:43.908728  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
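Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now (exit status 1 if not). The equivalent check with Go's crypto/x509, sketched against the first path above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// mirrors -checkend 86400: fail if the cert expires within the next 24h
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24 hours")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24 hours")
	}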
	I1029 09:11:43.965410  332670 kubeadm.go:401] StartCluster: {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:43.965540  332670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:11:43.965626  332670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:11:43.995873  332670 cri.go:89] found id: "f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c"
	I1029 09:11:43.995901  332670 cri.go:89] found id: "4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df"
	I1029 09:11:43.995906  332670 cri.go:89] found id: "74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1"
	I1029 09:11:43.995914  332670 cri.go:89] found id: "d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb"
	I1029 09:11:43.995918  332670 cri.go:89] found id: ""
	I1029 09:11:43.995966  332670 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:11:44.008633  332670 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:44Z" level=error msg="open /run/runc: no such file or directory"
	I1029 09:11:44.008718  332670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:11:44.017274  332670 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:11:44.017294  332670 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:11:44.017345  332670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:11:44.025333  332670 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:11:44.025758  332670 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-259430" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:44.025878  332670 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-3727/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-259430" cluster setting kubeconfig missing "newest-cni-259430" context setting]
	I1029 09:11:44.026174  332670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:44.027451  332670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:11:44.035974  332670 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:11:44.036039  332670 kubeadm.go:602] duration metric: took 18.736631ms to restartPrimaryControlPlane
	I1029 09:11:44.036054  332670 kubeadm.go:403] duration metric: took 70.654266ms to StartCluster
	I1029 09:11:44.036077  332670 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:44.036154  332670 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:44.036756  332670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:44.037049  332670 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:11:44.037175  332670 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:11:44.037286  332670 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-259430"
	I1029 09:11:44.037306  332670 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-259430"
	I1029 09:11:44.037309  332670 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:44.037312  332670 addons.go:70] Setting dashboard=true in profile "newest-cni-259430"
	I1029 09:11:44.037323  332670 addons.go:70] Setting default-storageclass=true in profile "newest-cni-259430"
	I1029 09:11:44.037333  332670 addons.go:239] Setting addon dashboard=true in "newest-cni-259430"
	I1029 09:11:44.037340  332670 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-259430"
	W1029 09:11:44.037315  332670 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:11:44.037388  332670 host.go:66] Checking if "newest-cni-259430" exists ...
	W1029 09:11:44.037357  332670 addons.go:248] addon dashboard should already be in state true
	I1029 09:11:44.037470  332670 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:44.037700  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.037862  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.037943  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.039500  332670 out.go:179] * Verifying Kubernetes components...
	I1029 09:11:44.040886  332670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:44.065220  332670 addons.go:239] Setting addon default-storageclass=true in "newest-cni-259430"
	W1029 09:11:44.065247  332670 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:11:44.065275  332670 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:44.065727  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.066837  332670 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:11:44.066837  332670 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:11:44.068425  332670 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:44.068463  332670 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:11:44.068472  332670 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:11:44.068533  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:44.069772  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:11:44.069805  332670 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:11:44.069870  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:44.096760  332670 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:44.096785  332670 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:11:44.096849  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:44.106721  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:44.110167  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:44.123868  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:44.197667  332670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:44.212755  332670 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:11:44.212855  332670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:11:44.223803  332670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:44.225610  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:11:44.225633  332670 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:11:44.228099  332670 api_server.go:72] duration metric: took 191.015519ms to wait for apiserver process to appear ...
	I1029 09:11:44.228121  332670 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:11:44.228145  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:44.237189  332670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:44.242330  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:11:44.242356  332670 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:11:44.260059  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:11:44.260104  332670 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:11:44.281398  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:11:44.281423  332670 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:11:44.299849  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:11:44.299876  332670 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:11:44.314553  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:11:44.314575  332670 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:11:44.328677  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:11:44.328703  332670 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:11:44.342209  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:11:44.342238  332670 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:11:44.355907  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:11:44.355933  332670 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:11:44.369828  332670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:11:45.398884  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 09:11:45.398927  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 09:11:45.398966  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:45.408784  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 09:11:45.408813  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 09:11:45.729098  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:45.733626  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:11:45.733662  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:11:45.927795  332670 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.703955868s)
	I1029 09:11:45.927850  332670 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.690634134s)
	I1029 09:11:45.927910  332670 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.558046287s)
	I1029 09:11:45.929525  332670 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-259430 addons enable metrics-server
	
	I1029 09:11:45.939749  332670 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1029 09:11:45.941261  332670 addons.go:515] duration metric: took 1.904092997s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
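Note the overlapping completion times above: the storage-provisioner, storageclass, and dashboard applies were issued concurrently and each took roughly 1.6-1.7s. A sketch of that fan-out with golang.org/x/sync/errgroup (a hypothetical standalone version; the real run also pins KUBECONFIG, uses the versioned kubectl binary, and passes all ten dashboard manifests in one invocation):

	package main

	import (
		"fmt"
		"os/exec"

		"golang.org/x/sync/errgroup"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
			"/etc/kubernetes/addons/dashboard-ns.yaml",
		}
		var g errgroup.Group
		for _, m := range manifests {
			m := m // capture per iteration (needed before Go 1.22)
			g.Go(func() error {
				out, err := exec.Command("kubectl", "apply", "-f", m).CombinedOutput()
				if err != nil {
					return fmt.Errorf("%s: %w\n%s", m, err, out)
				}
				return nil
			})
		}
		if err := g.Wait(); err != nil {
			fmt.Println("addon apply failed:", err)
			return
		}
		fmt.Println("all addon manifests applied")
	}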
	I1029 09:11:46.229071  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:46.233856  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:11:46.233891  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:11:46.729203  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:46.734256  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:11:46.735487  332670 api_server.go:141] control plane version: v1.34.1
	I1029 09:11:46.735512  332670 api_server.go:131] duration metric: took 2.507384146s to wait for apiserver health ...
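The 403, then 500, then 200 progression above is the apiserver becoming healthy: anonymous requests are rejected until the RBAC bootstrap roles land, then /healthz itemizes the poststarthooks that are still failing, then it returns a plain "ok". A minimal Go poller reproducing the wait (a sketch; TLS verification is skipped because only the status code and body matter here):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // expect "ok"
					return
				}
				fmt.Printf("healthz returned %d; retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}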
	I1029 09:11:46.735521  332670 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:11:46.739506  332670 system_pods.go:59] 8 kube-system pods found
	I1029 09:11:46.739543  332670 system_pods.go:61] "coredns-66bc5c9577-k74f5" [d32eecf7-613f-43fe-87b6-1c56dc6f7837] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:46.739553  332670 system_pods.go:61] "etcd-newest-cni-259430" [21bef91b-1e23-4c0b-836a-7d38dbcd158d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:11:46.739563  332670 system_pods.go:61] "kindnet-4555c" [e9503ed8-3583-471b-8ed2-cb19fa55932f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:11:46.739569  332670 system_pods.go:61] "kube-apiserver-newest-cni-259430" [e2aa2d83-bd57-4b42-9f74-cc369442fb48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:11:46.739578  332670 system_pods.go:61] "kube-controller-manager-newest-cni-259430" [c8b1f927-8450-4b3d-8380-0d74388f7b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:11:46.739583  332670 system_pods.go:61] "kube-proxy-md8mn" [5b216c8f-e72c-44bd-ac4a-4f07213f90bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:11:46.739597  332670 system_pods.go:61] "kube-scheduler-newest-cni-259430" [6dffb3f4-a5a2-456f-bfe4-34c2a0916645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:11:46.739619  332670 system_pods.go:61] "storage-provisioner" [b614d976-a2b2-4dff-9276-58ac33de3f70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:46.739631  332670 system_pods.go:74] duration metric: took 4.102287ms to wait for pod list to return data ...
	I1029 09:11:46.739642  332670 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:11:46.742381  332670 default_sa.go:45] found service account: "default"
	I1029 09:11:46.742410  332670 default_sa.go:55] duration metric: took 2.76086ms for default service account to be created ...
	I1029 09:11:46.742426  332670 kubeadm.go:587] duration metric: took 2.70534646s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:11:46.742455  332670 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:11:46.745725  332670 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:11:46.745760  332670 node_conditions.go:123] node cpu capacity is 8
	I1029 09:11:46.745776  332670 node_conditions.go:105] duration metric: took 3.311056ms to run NodePressure ...
	I1029 09:11:46.745791  332670 start.go:242] waiting for startup goroutines ...
	I1029 09:11:46.745801  332670 start.go:247] waiting for cluster config update ...
	I1029 09:11:46.745818  332670 start.go:256] writing updated cluster config ...
	I1029 09:11:46.746138  332670 ssh_runner.go:195] Run: rm -f paused
	I1029 09:11:46.802160  332670 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:46.803803  332670 out.go:179] * Done! kubectl is now configured to use "newest-cni-259430" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.671316898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.674345459Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bf7befaa-313b-42c3-8b2c-8866b93460dc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.675055158Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1e0e615f-5dbf-4e5e-954a-74656e291f75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.675907592Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.676347448Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.676806637Z" level=info msg="Ran pod sandbox 3a541d035871e950f8ff5752ebd73a3a7b52a37040ffebffba23fed7ab51cc7b with infra container: kube-system/kindnet-4555c/POD" id=bf7befaa-313b-42c3-8b2c-8866b93460dc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.676878251Z" level=info msg="Ran pod sandbox 665cd2bd6c5ee4e7103fc7eb94b92fc96af4d011437dd3d05d1246e55bc0c848 with infra container: kube-system/kube-proxy-md8mn/POD" id=1e0e615f-5dbf-4e5e-954a-74656e291f75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.677954828Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0e5bd556-6b77-489d-afcb-3d774eab1384 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.677979595Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0418c405-d8ca-4cea-9a86-77b7bb37a73a name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.678926285Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7a41f3e3-aaf7-41ab-99af-485e0cbdef03 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.678949639Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1ef63b1b-1853-4326-a714-f6f79b067f0e name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680095649Z" level=info msg="Creating container: kube-system/kindnet-4555c/kindnet-cni" id=29b725ce-6a19-4efa-8c33-f73aaaeccc3c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680198844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680098694Z" level=info msg="Creating container: kube-system/kube-proxy-md8mn/kube-proxy" id=c4a880ee-3e7b-4372-ac62-5ce3950616de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680451989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.684792009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.685358159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.687457468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.688029336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.713750828Z" level=info msg="Created container ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea: kube-system/kindnet-4555c/kindnet-cni" id=29b725ce-6a19-4efa-8c33-f73aaaeccc3c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.714400073Z" level=info msg="Starting container: ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea" id=1bc80fa6-267f-4d65-8c04-26839de8c199 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.716097473Z" level=info msg="Started container" PID=1044 containerID=ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea description=kube-system/kindnet-4555c/kindnet-cni id=1bc80fa6-267f-4d65-8c04-26839de8c199 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a541d035871e950f8ff5752ebd73a3a7b52a37040ffebffba23fed7ab51cc7b
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.716615084Z" level=info msg="Created container 9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe: kube-system/kube-proxy-md8mn/kube-proxy" id=c4a880ee-3e7b-4372-ac62-5ce3950616de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.717293917Z" level=info msg="Starting container: 9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe" id=4600b622-7197-48c3-9d67-f879d70e1a03 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.719947186Z" level=info msg="Started container" PID=1045 containerID=9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe description=kube-system/kube-proxy-md8mn/kube-proxy id=4600b622-7197-48c3-9d67-f879d70e1a03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=665cd2bd6c5ee4e7103fc7eb94b92fc96af4d011437dd3d05d1246e55bc0c848
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9379d9fd0f7b6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   665cd2bd6c5ee       kube-proxy-md8mn                            kube-system
	ed1be3dde08f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   3a541d035871e       kindnet-4555c                               kube-system
	f3e3a0ed6603e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   081e95130ca23       etcd-newest-cni-259430                      kube-system
	4ab2230f580db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   e8f091858a617       kube-controller-manager-newest-cni-259430   kube-system
	74cc9b0ba8d30       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   be993b1602e71       kube-apiserver-newest-cni-259430            kube-system
	d9d755902ee30       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   28d9cf116c054       kube-scheduler-newest-cni-259430            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-259430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-259430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=newest-cni-259430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:11:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-259430
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:11:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-259430
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b0b59dc6-8cfb-44ff-8492-2c787c88523a
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-259430                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-4555c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-259430             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-259430    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-md8mn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-259430             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node newest-cni-259430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node newest-cni-259430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node newest-cni-259430 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node newest-cni-259430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-259430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node newest-cni-259430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node newest-cni-259430 event: Registered Node newest-cni-259430 in Controller
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-259430 event: Registered Node newest-cni-259430 in Controller
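	
	The Ready condition above is False with reason KubeletNotReady: "no CNI configuration file in /etc/cni/net.d/". Since kindnet-cni restarted only seconds before this snapshot, the CNI config may simply not have been written back yet. A quick recheck, assuming the context and profile names from this report:
	
	    kubectl --context newest-cni-259430 get nodes -o wide
	    out/minikube-linux-amd64 ssh -p newest-cni-259430 -- ls /etc/cni/net.d/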
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c] <==
	{"level":"warn","ts":"2025-10-29T09:11:44.761028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.767556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.776133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.782469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.788982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.794979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.802121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.810316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.823868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.833181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.840392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.847041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.853631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.861110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.867948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.875079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.882627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.890510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.897034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.905266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.911585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.924022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.937392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.943485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.989196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:50 up 54 min,  0 user,  load average: 3.52, 3.98, 2.66
	Linux newest-cni-259430 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea] <==
	I1029 09:11:46.890452       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:11:46.984613       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:11:46.984761       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:11:46.984777       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:11:46.984816       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:11:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:11:47.186337       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:11:47.186375       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:11:47.186390       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:11:47.187205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:11:47.586692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:11:47.586724       1 metrics.go:72] Registering metrics
	I1029 09:11:47.586815       1 controller.go:711] "Syncing nftables rules"
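	
	The "nri plugin exited" line above is non-fatal: the NRI socket is absent on this node, and kindnet carries on without it, as the subsequent "Caches are synced" lines show. Confirming the socket's absence (profile name from this report):
	
	    out/minikube-linux-amd64 ssh -p newest-cni-259430 -- ls -l /var/run/nri/nri.sock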
	
	
	==> kube-apiserver [74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1] <==
	I1029 09:11:45.481487       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1029 09:11:45.481737       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:11:45.481763       1 policy_source.go:240] refreshing policies
	I1029 09:11:45.485972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:11:45.495769       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:11:45.497886       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:11:45.498134       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:11:45.498231       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:11:45.498247       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:11:45.498256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:11:45.498263       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:11:45.501914       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 09:11:45.508895       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:11:45.739281       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:11:45.778737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:11:45.802768       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:11:45.810796       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:11:45.819051       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:11:45.856734       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.188.226"}
	I1029 09:11:45.871449       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.197.141"}
	I1029 09:11:46.384609       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:11:48.817044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:11:49.166880       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:11:49.266464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:11:49.266506       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df] <==
	I1029 09:11:48.813116       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:11:48.813147       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:11:48.813164       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:11:48.813170       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:11:48.813190       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:11:48.813202       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:11:48.813214       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:11:48.813301       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:11:48.813451       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:11:48.813470       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:11:48.813470       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:11:48.814903       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:11:48.814921       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:11:48.814930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:11:48.815272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:11:48.818229       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:11:48.818244       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:11:48.818307       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:11:48.820573       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:11:48.823565       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:11:48.825154       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-259430"
	I1029 09:11:48.825250       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:11:48.826059       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:11:48.829309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:11:48.842816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe] <==
	I1029 09:11:46.758022       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:11:46.831600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:11:46.932498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:11:46.932541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:11:46.932684       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:11:46.953120       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:11:46.953194       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:11:46.958685       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:11:46.959084       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:11:46.959124       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:11:46.960275       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:11:46.960300       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:11:46.960368       1 config.go:200] "Starting service config controller"
	I1029 09:11:46.960379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:11:46.960415       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:11:46.960433       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:11:46.960590       1 config.go:309] "Starting node config controller"
	I1029 09:11:46.960616       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:11:46.960624       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:11:47.060525       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:11:47.060544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:11:47.060554       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb] <==
	I1029 09:11:44.096780       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:11:45.415773       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:11:45.415825       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:11:45.415839       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:11:45.415850       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:11:45.437045       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:11:45.437081       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:11:45.440232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:11:45.440284       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:11:45.440765       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:11:45.442746       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:11:45.540947       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.503162     672 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.503205     672 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.504165     672 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.514046     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-259430\" already exists" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.517184     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-259430\" already exists" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.517188     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-259430\" already exists" pod="kube-system/kube-apiserver-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.580765     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-259430\" already exists" pod="kube-system/kube-apiserver-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.580809     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.588629     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-259430\" already exists" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.588668     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.595612     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-259430\" already exists" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.595649     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.602051     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-259430\" already exists" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.328309     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: E1029 09:11:46.335768     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-259430\" already exists" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.362758     672 apiserver.go:52] "Watching apiserver"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.464952     672 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.508858     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-xtables-lock\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.508918     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-lib-modules\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.508967     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-cni-cfg\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.509039     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-xtables-lock\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.509061     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-lib-modules\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:47 newest-cni-259430 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:11:47 newest-cni-259430 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:11:47 newest-cni-259430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
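	
	The final three lines show systemd stopping kubelet.service, consistent with the pause flow under test, which stops the kubelet unit before pausing the kube-system containers. The unit state can be checked directly; a sketch using the profile from this report:
	
	    out/minikube-linux-amd64 ssh -p newest-cni-259430 -- sudo systemctl status kubelet --no-pager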
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-259430 -n newest-cni-259430
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-259430 -n newest-cni-259430: exit status 2 (355.534065ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-259430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp: exit status 1 (64.159063ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-k74f5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-kb7r8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-trgmp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-259430
helpers_test.go:243: (dbg) docker inspect newest-cni-259430:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb",
	        "Created": "2025-10-29T09:11:05.338331033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-29T09:11:36.999957877Z",
	            "FinishedAt": "2025-10-29T09:11:36.114284506Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/hostname",
	        "HostsPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/hosts",
	        "LogPath": "/var/lib/docker/containers/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb/898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb-json.log",
	        "Name": "/newest-cni-259430",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-259430:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-259430",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "898af032bdf916aa3257aea11cfb759af7db6b23d1e0d9cb500e9d46b2622ebb",
	                "LowerDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb-init/diff:/var/lib/docker/overlay2/811787aa3f0030913a9ea9493b86dfcc2b57837165b334ef8445b678aa25f23d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d4dfbd09fa5e0cf3f5f114acf8641b739db6281f40165e806f5f59b8b1f6d1fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-259430",
	                "Source": "/var/lib/docker/volumes/newest-cni-259430/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-259430",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-259430",
	                "name.minikube.sigs.k8s.io": "newest-cni-259430",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "621d55fd358f7a63a391e5675bf5022ea6541694211944060b97c67a8f5e9041",
	            "SandboxKey": "/var/run/docker/netns/621d55fd358f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-259430": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ed:04:22:05:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "52c784c79ded45742c986e67d511ad367789016db5dda3c8e7a6f446705f967c",
	                    "EndpointID": "23f77a7b1d7cfb5043ba09aa2e433ab95e1c40d42f1e03d727a3cce0fde70d07",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-259430",
	                        "898af032bdf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
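
The full inspect dump above can be narrowed to just the fields the Pause assertion cares about with a Go-template query (a sketch; docker inspect --format is standard, and the container name comes from this report):

	docker inspect --format '{{.State.Status}} paused={{.State.Paused}}' newest-cni-259430

Against the State block shown here, such a query would print "running paused=false", i.e. the container itself was never left paused.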
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430: exit status 2 (334.314206ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-259430 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ old-k8s-version-096492 image list --format=json                                                                                                                                                                                               │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p old-k8s-version-096492 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ image   │ embed-certs-834228 image list --format=json                                                                                                                                                                                                   │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p embed-certs-834228 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ image   │ no-preload-043790 image list --format=json                                                                                                                                                                                                    │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ pause   │ -p no-preload-043790 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ delete  │ -p old-k8s-version-096492                                                                                                                                                                                                                     │ old-k8s-version-096492       │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:10 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p embed-certs-834228                                                                                                                                                                                                                         │ embed-certs-834228           │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p no-preload-043790                                                                                                                                                                                                                          │ no-preload-043790            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-259430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-259430 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ default-k8s-diff-port-017274 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ pause   │ -p default-k8s-diff-port-017274 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-017274                                                                                                                                                                                                               │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-259430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ start   │ -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p default-k8s-diff-port-017274                                                                                                                                                                                                               │ default-k8s-diff-port-017274 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ image   │ newest-cni-259430 image list --format=json                                                                                                                                                                                                    │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ pause   │ -p newest-cni-259430 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-259430            │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
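	
	This audit table is part of the standard "minikube logs" dump; if only the command history is needed, it can also be requested on its own. A sketch, assuming the --audit flag of "minikube logs" is available in this build:
	
	    out/minikube-linux-amd64 logs --audit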
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:11:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:11:36.756662  332670 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:11:36.756934  332670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:36.756942  332670 out.go:374] Setting ErrFile to fd 2...
	I1029 09:11:36.756947  332670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:11:36.757183  332670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:11:36.757626  332670 out.go:368] Setting JSON to false
	I1029 09:11:36.758720  332670 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3245,"bootTime":1761725852,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:11:36.758808  332670 start.go:143] virtualization: kvm guest
	I1029 09:11:36.760726  332670 out.go:179] * [newest-cni-259430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:11:36.762031  332670 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:11:36.762042  332670 notify.go:221] Checking for updates...
	I1029 09:11:36.764458  332670 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:11:36.765702  332670 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:36.770278  332670 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:11:36.771737  332670 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:11:36.773013  332670 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:11:36.774824  332670 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:36.775540  332670 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:11:36.801801  332670 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:11:36.801948  332670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:11:36.864465  332670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-29 09:11:36.854076219 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:11:36.864577  332670 docker.go:319] overlay module found
	I1029 09:11:36.866450  332670 out.go:179] * Using the docker driver based on existing profile
	I1029 09:11:36.867623  332670 start.go:309] selected driver: docker
	I1029 09:11:36.867643  332670 start.go:930] validating driver "docker" against &{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:36.867749  332670 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:11:36.868376  332670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:11:36.926679  332670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-29 09:11:36.916086544 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:11:36.927013  332670 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:11:36.927090  332670 cni.go:84] Creating CNI manager for ""
	I1029 09:11:36.927162  332670 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:36.927250  332670 start.go:353] cluster config:
	{Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:36.929052  332670 out.go:179] * Starting "newest-cni-259430" primary control-plane node in "newest-cni-259430" cluster
	I1029 09:11:36.930119  332670 cache.go:124] Beginning downloading kic base image for docker with crio
	I1029 09:11:36.931204  332670 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1029 09:11:36.932310  332670 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:36.932335  332670 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1029 09:11:36.932353  332670 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:11:36.932362  332670 cache.go:59] Caching tarball of preloaded images
	I1029 09:11:36.932459  332670 preload.go:233] Found /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:11:36.932483  332670 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:11:36.932615  332670 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:11:36.953333  332670 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1029 09:11:36.953357  332670 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1029 09:11:36.953377  332670 cache.go:233] Successfully downloaded all kic artifacts
	I1029 09:11:36.953403  332670 start.go:360] acquireMachinesLock for newest-cni-259430: {Name:mk9f7a4924e0dc30dd9007c8d213cb8c4076ee8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:11:36.953471  332670 start.go:364] duration metric: took 45.255µs to acquireMachinesLock for "newest-cni-259430"
	I1029 09:11:36.953494  332670 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:11:36.953503  332670 fix.go:54] fixHost starting: 
	I1029 09:11:36.953722  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:36.971874  332670 fix.go:112] recreateIfNeeded on newest-cni-259430: state=Stopped err=<nil>
	W1029 09:11:36.971903  332670 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:11:36.973783  332670 out.go:252] * Restarting existing docker container for "newest-cni-259430" ...
	I1029 09:11:36.973850  332670 cli_runner.go:164] Run: docker start newest-cni-259430
	I1029 09:11:37.228962  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:37.249146  332670 kic.go:430] container "newest-cni-259430" state is running.
	I1029 09:11:37.249547  332670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:37.269403  332670 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/config.json ...
	I1029 09:11:37.269699  332670 machine.go:94] provisionDockerMachine start ...
	I1029 09:11:37.269798  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:37.289555  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:37.289804  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:37.289817  332670 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:11:37.290428  332670 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43456->127.0.0.1:33133: read: connection reset by peer
	I1029 09:11:40.434498  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:40.434531  332670 ubuntu.go:182] provisioning hostname "newest-cni-259430"
	I1029 09:11:40.434599  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:40.453493  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:40.453831  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:40.453859  332670 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-259430 && echo "newest-cni-259430" | sudo tee /etc/hostname
	I1029 09:11:40.606076  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-259430
	
	I1029 09:11:40.606142  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:40.625454  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:40.625733  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:40.625754  332670 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-259430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-259430/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-259430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:11:40.767975  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: 
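	
	The script above follows the Debian convention of pinning the node's own hostname to 127.0.1.1 so it resolves without DNS. A quick sanity check inside the node (illustrative only, not part of this run):
	
	    grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 newest-cni-259430
	    hostname                       # expect: newest-cni-259430
	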
	I1029 09:11:40.768015  332670 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21800-3727/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-3727/.minikube}
	I1029 09:11:40.768035  332670 ubuntu.go:190] setting up certificates
	I1029 09:11:40.768046  332670 provision.go:84] configureAuth start
	I1029 09:11:40.768112  332670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:40.787510  332670 provision.go:143] copyHostCerts
	I1029 09:11:40.787579  332670 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem, removing ...
	I1029 09:11:40.787588  332670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem
	I1029 09:11:40.787674  332670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/ca.pem (1078 bytes)
	I1029 09:11:40.787813  332670 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem, removing ...
	I1029 09:11:40.787827  332670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem
	I1029 09:11:40.787869  332670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/cert.pem (1123 bytes)
	I1029 09:11:40.787968  332670 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem, removing ...
	I1029 09:11:40.787978  332670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem
	I1029 09:11:40.788032  332670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-3727/.minikube/key.pem (1679 bytes)
	I1029 09:11:40.788132  332670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem org=jenkins.newest-cni-259430 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-259430]
	I1029 09:11:41.126296  332670 provision.go:177] copyRemoteCerts
	I1029 09:11:41.126358  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:11:41.126393  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.145545  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.247965  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1029 09:11:41.266823  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:11:41.285573  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1029 09:11:41.304605  332670 provision.go:87] duration metric: took 536.544792ms to configureAuth
	I1029 09:11:41.304635  332670 ubuntu.go:206] setting minikube options for container-runtime
	I1029 09:11:41.304822  332670 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:41.304921  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.324332  332670 main.go:143] libmachine: Using SSH client type: native
	I1029 09:11:41.324605  332670 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1029 09:11:41.324632  332670 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:11:41.598885  332670 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:11:41.598918  332670 machine.go:97] duration metric: took 4.329196359s to provisionDockerMachine
	I1029 09:11:41.598932  332670 start.go:293] postStartSetup for "newest-cni-259430" (driver="docker")
	I1029 09:11:41.598946  332670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:11:41.599033  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:11:41.599074  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.618267  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.720418  332670 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:11:41.724636  332670 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1029 09:11:41.724671  332670 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1029 09:11:41.724682  332670 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/addons for local assets ...
	I1029 09:11:41.724740  332670 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-3727/.minikube/files for local assets ...
	I1029 09:11:41.724815  332670 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem -> 72182.pem in /etc/ssl/certs
	I1029 09:11:41.724901  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:11:41.733149  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:41.751940  332670 start.go:296] duration metric: took 152.990446ms for postStartSetup
	I1029 09:11:41.752077  332670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:11:41.752129  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.771288  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.869451  332670 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1029 09:11:41.874389  332670 fix.go:56] duration metric: took 4.920882433s for fixHost
	I1029 09:11:41.874414  332670 start.go:83] releasing machines lock for "newest-cni-259430", held for 4.920931985s
	I1029 09:11:41.874471  332670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-259430
	I1029 09:11:41.894762  332670 ssh_runner.go:195] Run: cat /version.json
	I1029 09:11:41.894816  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.894830  332670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:11:41.894891  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:41.914876  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:41.915192  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:42.086224  332670 ssh_runner.go:195] Run: systemctl --version
	I1029 09:11:42.093150  332670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:11:42.129693  332670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:11:42.134790  332670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:11:42.134874  332670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:11:42.143276  332670 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
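	
	When this step does find bridge/podman configs, it renames them with a .mk_disabled suffix rather than deleting them; the inverse restores them (a sketch, assuming the same naming convention):
	
	    sudo find /etc/cni/net.d -name '*.mk_disabled' -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
	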
	I1029 09:11:42.143303  332670 start.go:496] detecting cgroup driver to use...
	I1029 09:11:42.143334  332670 detect.go:190] detected "systemd" cgroup driver on host os
	I1029 09:11:42.143372  332670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:11:42.158066  332670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:11:42.170816  332670 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:11:42.170867  332670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:11:42.185713  332670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:11:42.199149  332670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:11:42.282682  332670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:11:42.364762  332670 docker.go:234] disabling docker service ...
	I1029 09:11:42.364844  332670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:11:42.380347  332670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:11:42.393474  332670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:11:42.475743  332670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:11:42.557951  332670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:11:42.571423  332670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:11:42.586876  332670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:11:42.586948  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.596740  332670 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1029 09:11:42.596826  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.606715  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.616629  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.626731  332670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:11:42.635847  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.645578  332670 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.654790  332670 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:11:42.664107  332670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:11:42.672046  332670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:11:42.680097  332670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:42.761870  332670 ssh_runner.go:195] Run: sudo systemctl restart crio
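	
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys (values copied from the commands; surrounding TOML sections omitted). One way to confirm, assuming the same file layout:
	
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "systemd"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	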
	I1029 09:11:42.874833  332670 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:11:42.874888  332670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:11:42.879088  332670 start.go:564] Will wait 60s for crictl version
	I1029 09:11:42.879155  332670 ssh_runner.go:195] Run: which crictl
	I1029 09:11:42.883055  332670 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1029 09:11:42.909854  332670 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1029 09:11:42.909935  332670 ssh_runner.go:195] Run: crio --version
	I1029 09:11:42.938913  332670 ssh_runner.go:195] Run: crio --version
	I1029 09:11:42.970611  332670 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1029 09:11:42.971807  332670 cli_runner.go:164] Run: docker network inspect newest-cni-259430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1029 09:11:42.989838  332670 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1029 09:11:42.994314  332670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
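	
	The temp-file-then-sudo-cp pattern is used because output redirection is performed by the invoking shell before sudo runs, so an unprivileged "sudo cmd > /etc/hosts" would fail. The logged edit, spelled out as a standalone command (note the literal tab between IP and name):
	
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$ \
	      && sudo cp /tmp/h.$$ /etc/hosts
	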
	I1029 09:11:43.007148  332670 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1029 09:11:43.008325  332670 kubeadm.go:884] updating cluster {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1029 09:11:43.008472  332670 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:11:43.008561  332670 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:43.042322  332670 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:43.042347  332670 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:11:43.042408  332670 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:11:43.069835  332670 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:11:43.069867  332670 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:11:43.069878  332670 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1029 09:11:43.070028  332670 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-259430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
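	
	That ExecStart is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), so the merged unit can be inspected on the node itself:
	
	    systemctl cat kubelet                  # unit file plus all drop-ins
	    systemctl show kubelet -p ExecStart    # flags actually in effect
	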
	I1029 09:11:43.070115  332670 ssh_runner.go:195] Run: crio config
	I1029 09:11:43.117979  332670 cni.go:84] Creating CNI manager for ""
	I1029 09:11:43.118030  332670 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1029 09:11:43.118063  332670 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1029 09:11:43.118096  332670 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-259430 NodeName:newest-cni-259430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:11:43.118270  332670 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-259430"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:11:43.118349  332670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:11:43.127077  332670 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:11:43.127149  332670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:11:43.135577  332670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1029 09:11:43.149793  332670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:11:43.163258  332670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
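	
	Once written, a config of this shape can be linted offline before kubeadm consumes it (requires kubeadm >= v1.26; binary and file paths taken from the log):
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	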
	I1029 09:11:43.176761  332670 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1029 09:11:43.180964  332670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:11:43.192178  332670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:43.267238  332670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:43.290651  332670 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430 for IP: 192.168.85.2
	I1029 09:11:43.290675  332670 certs.go:195] generating shared ca certs ...
	I1029 09:11:43.290695  332670 certs.go:227] acquiring lock for ca certs: {Name:mk2fcaaead4b0fcf1dc2cfc80d95b3cc12092f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:43.290867  332670 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key
	I1029 09:11:43.290924  332670 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key
	I1029 09:11:43.290938  332670 certs.go:257] generating profile certs ...
	I1029 09:11:43.291089  332670 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/client.key
	I1029 09:11:43.291155  332670 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key.64cd47c3
	I1029 09:11:43.291203  332670 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key
	I1029 09:11:43.291343  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem (1338 bytes)
	W1029 09:11:43.291381  332670 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218_empty.pem, impossibly tiny 0 bytes
	I1029 09:11:43.291393  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca-key.pem (1675 bytes)
	I1029 09:11:43.291421  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/ca.pem (1078 bytes)
	I1029 09:11:43.291451  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:11:43.291482  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/certs/key.pem (1679 bytes)
	I1029 09:11:43.291534  332670 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem (1708 bytes)
	I1029 09:11:43.292381  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:11:43.312484  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1029 09:11:43.333163  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:11:43.353553  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1029 09:11:43.377459  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1029 09:11:43.396965  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:11:43.416155  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:11:43.434347  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/newest-cni-259430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:11:43.452578  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/ssl/certs/72182.pem --> /usr/share/ca-certificates/72182.pem (1708 bytes)
	I1029 09:11:43.470914  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:11:43.488980  332670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-3727/.minikube/certs/7218.pem --> /usr/share/ca-certificates/7218.pem (1338 bytes)
	I1029 09:11:43.506517  332670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:11:43.519420  332670 ssh_runner.go:195] Run: openssl version
	I1029 09:11:43.525794  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72182.pem && ln -fs /usr/share/ca-certificates/72182.pem /etc/ssl/certs/72182.pem"
	I1029 09:11:43.535350  332670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72182.pem
	I1029 09:11:43.539606  332670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:26 /usr/share/ca-certificates/72182.pem
	I1029 09:11:43.539670  332670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72182.pem
	I1029 09:11:43.575424  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72182.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:11:43.584299  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:11:43.593704  332670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:43.597760  332670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:43.597836  332670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:11:43.631867  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:11:43.640751  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7218.pem && ln -fs /usr/share/ca-certificates/7218.pem /etc/ssl/certs/7218.pem"
	I1029 09:11:43.649810  332670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7218.pem
	I1029 09:11:43.653879  332670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:26 /usr/share/ca-certificates/7218.pem
	I1029 09:11:43.653935  332670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7218.pem
	I1029 09:11:43.688289  332670 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7218.pem /etc/ssl/certs/51391683.0"
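	
	The "openssl x509 -hash" / "ln -fs" pairs above hand-build the subject-hash symlinks OpenSSL uses to locate trust anchors in /etc/ssl/certs; "openssl rehash /etc/ssl/certs" produces the same links for a whole directory. Single-cert equivalent (path and hash reused from the log):
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 per the log
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	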
	I1029 09:11:43.696971  332670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:11:43.701184  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:11:43.735920  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:11:43.770305  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:11:43.811599  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:11:43.855541  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:11:43.908728  332670 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
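	
	In each of these checks, -checkend 86400 makes openssl exit non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether certificates need regeneration. Standalone form:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo 'valid for at least 24h' || echo 'expiring within 24h'
	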
	I1029 09:11:43.965410  332670 kubeadm.go:401] StartCluster: {Name:newest-cni-259430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-259430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:11:43.965540  332670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:11:43.965626  332670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:11:43.995873  332670 cri.go:89] found id: "f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c"
	I1029 09:11:43.995901  332670 cri.go:89] found id: "4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df"
	I1029 09:11:43.995906  332670 cri.go:89] found id: "74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1"
	I1029 09:11:43.995914  332670 cri.go:89] found id: "d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb"
	I1029 09:11:43.995918  332670 cri.go:89] found id: ""
	I1029 09:11:43.995966  332670 ssh_runner.go:195] Run: sudo runc list -f json
	W1029 09:11:44.008633  332670 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T09:11:44Z" level=error msg="open /run/runc: no such file or directory"
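	
	"runc list" reads container state from /run/runc, which does not exist on this freshly restarted node, hence the non-fatal warning; on a CRI-O stack the CRI endpoint remains the reliable way to enumerate containers, as the preceding crictl call shows:
	
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	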
	I1029 09:11:44.008718  332670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:11:44.017274  332670 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:11:44.017294  332670 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:11:44.017345  332670 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:11:44.025333  332670 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:11:44.025758  332670 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-259430" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:44.025878  332670 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-3727/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-259430" cluster setting kubeconfig missing "newest-cni-259430" context setting]
	I1029 09:11:44.026174  332670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:44.027451  332670 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:11:44.035974  332670 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1029 09:11:44.036039  332670 kubeadm.go:602] duration metric: took 18.736631ms to restartPrimaryControlPlane
	I1029 09:11:44.036054  332670 kubeadm.go:403] duration metric: took 70.654266ms to StartCluster
	I1029 09:11:44.036077  332670 settings.go:142] acquiring lock: {Name:mk07eebd81bddcab3dc3d429be8b09770a1732f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:44.036154  332670 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:11:44.036756  332670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-3727/kubeconfig: {Name:mkb38fa7a63145629aec7e985c233206ce03d2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:11:44.037049  332670 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:11:44.037175  332670 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:11:44.037286  332670 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-259430"
	I1029 09:11:44.037306  332670 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-259430"
	I1029 09:11:44.037309  332670 config.go:182] Loaded profile config "newest-cni-259430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:11:44.037312  332670 addons.go:70] Setting dashboard=true in profile "newest-cni-259430"
	I1029 09:11:44.037323  332670 addons.go:70] Setting default-storageclass=true in profile "newest-cni-259430"
	I1029 09:11:44.037333  332670 addons.go:239] Setting addon dashboard=true in "newest-cni-259430"
	I1029 09:11:44.037340  332670 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-259430"
	W1029 09:11:44.037315  332670 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:11:44.037388  332670 host.go:66] Checking if "newest-cni-259430" exists ...
	W1029 09:11:44.037357  332670 addons.go:248] addon dashboard should already be in state true
	I1029 09:11:44.037470  332670 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:44.037700  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.037862  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.037943  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.039500  332670 out.go:179] * Verifying Kubernetes components...
	I1029 09:11:44.040886  332670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:11:44.065220  332670 addons.go:239] Setting addon default-storageclass=true in "newest-cni-259430"
	W1029 09:11:44.065247  332670 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:11:44.065275  332670 host.go:66] Checking if "newest-cni-259430" exists ...
	I1029 09:11:44.065727  332670 cli_runner.go:164] Run: docker container inspect newest-cni-259430 --format={{.State.Status}}
	I1029 09:11:44.066837  332670 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1029 09:11:44.066837  332670 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:11:44.068425  332670 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:44.068463  332670 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:11:44.068472  332670 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1029 09:11:44.068533  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:44.069772  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1029 09:11:44.069805  332670 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1029 09:11:44.069870  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:44.096760  332670 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:44.096785  332670 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:11:44.096849  332670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-259430
	I1029 09:11:44.106721  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:44.110167  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:44.123868  332670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/newest-cni-259430/id_rsa Username:docker}
	I1029 09:11:44.197667  332670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:11:44.212755  332670 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:11:44.212855  332670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:11:44.223803  332670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:11:44.225610  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1029 09:11:44.225633  332670 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1029 09:11:44.228099  332670 api_server.go:72] duration metric: took 191.015519ms to wait for apiserver process to appear ...
	I1029 09:11:44.228121  332670 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:11:44.228145  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:44.237189  332670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:11:44.242330  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1029 09:11:44.242356  332670 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1029 09:11:44.260059  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1029 09:11:44.260104  332670 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1029 09:11:44.281398  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1029 09:11:44.281423  332670 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1029 09:11:44.299849  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1029 09:11:44.299876  332670 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1029 09:11:44.314553  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1029 09:11:44.314575  332670 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1029 09:11:44.328677  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1029 09:11:44.328703  332670 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1029 09:11:44.342209  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1029 09:11:44.342238  332670 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1029 09:11:44.355907  332670 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:11:44.355933  332670 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1029 09:11:44.369828  332670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1029 09:11:45.398884  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 09:11:45.398927  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 09:11:45.398966  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:45.408784  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 09:11:45.408813  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 09:11:45.729098  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:45.733626  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:11:45.733662  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:11:45.927795  332670 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.703955868s)
	I1029 09:11:45.927850  332670 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.690634134s)
	I1029 09:11:45.927910  332670 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.558046287s)
	I1029 09:11:45.929525  332670 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-259430 addons enable metrics-server
	
	I1029 09:11:45.939749  332670 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1029 09:11:45.941261  332670 addons.go:515] duration metric: took 1.904092997s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1029 09:11:46.229071  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:46.233856  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:11:46.233891  332670 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:11:46.729203  332670 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1029 09:11:46.734256  332670 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1029 09:11:46.735487  332670 api_server.go:141] control plane version: v1.34.1
	I1029 09:11:46.735512  332670 api_server.go:131] duration metric: took 2.507384146s to wait for apiserver health ...
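	The healthz sequence above is the normal restart progression: first 403 while anonymous access to /healthz is still forbidden (note the "[-]poststarthook/rbac/bootstrap-roles failed" entries, which gate the binding that normally permits it), then 500 while the last two post-start hooks run, then 200 after roughly 2.5s. A stand-alone Go sketch of this kind of poll, assuming the apiserver's self-signed cert (hence the skipped TLS verification):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitHealthy polls url until it returns 200 "ok" or the timeout expires.
	// 403 and 500 responses are expected while bootstrap hooks are still
	// running, so they are simply retried.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}
	
	func main() {
		fmt.Println(waitHealthy("https://192.168.85.2:8443/healthz", time.Minute))
	}
	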
	I1029 09:11:46.735521  332670 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:11:46.739506  332670 system_pods.go:59] 8 kube-system pods found
	I1029 09:11:46.739543  332670 system_pods.go:61] "coredns-66bc5c9577-k74f5" [d32eecf7-613f-43fe-87b6-1c56dc6f7837] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:46.739553  332670 system_pods.go:61] "etcd-newest-cni-259430" [21bef91b-1e23-4c0b-836a-7d38dbcd158d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:11:46.739563  332670 system_pods.go:61] "kindnet-4555c" [e9503ed8-3583-471b-8ed2-cb19fa55932f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1029 09:11:46.739569  332670 system_pods.go:61] "kube-apiserver-newest-cni-259430" [e2aa2d83-bd57-4b42-9f74-cc369442fb48] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:11:46.739578  332670 system_pods.go:61] "kube-controller-manager-newest-cni-259430" [c8b1f927-8450-4b3d-8380-0d74388f7b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:11:46.739583  332670 system_pods.go:61] "kube-proxy-md8mn" [5b216c8f-e72c-44bd-ac4a-4f07213f90bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:11:46.739597  332670 system_pods.go:61] "kube-scheduler-newest-cni-259430" [6dffb3f4-a5a2-456f-bfe4-34c2a0916645] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:11:46.739619  332670 system_pods.go:61] "storage-provisioner" [b614d976-a2b2-4dff-9276-58ac33de3f70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1029 09:11:46.739631  332670 system_pods.go:74] duration metric: took 4.102287ms to wait for pod list to return data ...
	I1029 09:11:46.739642  332670 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:11:46.742381  332670 default_sa.go:45] found service account: "default"
	I1029 09:11:46.742410  332670 default_sa.go:55] duration metric: took 2.76086ms for default service account to be created ...
	I1029 09:11:46.742426  332670 kubeadm.go:587] duration metric: took 2.70534646s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1029 09:11:46.742455  332670 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:11:46.745725  332670 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1029 09:11:46.745760  332670 node_conditions.go:123] node cpu capacity is 8
	I1029 09:11:46.745776  332670 node_conditions.go:105] duration metric: took 3.311056ms to run NodePressure ...
	I1029 09:11:46.745791  332670 start.go:242] waiting for startup goroutines ...
	I1029 09:11:46.745801  332670 start.go:247] waiting for cluster config update ...
	I1029 09:11:46.745818  332670 start.go:256] writing updated cluster config ...
	I1029 09:11:46.746138  332670 ssh_runner.go:195] Run: rm -f paused
	I1029 09:11:46.802160  332670 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 09:11:46.803803  332670 out.go:179] * Done! kubectl is now configured to use "newest-cni-259430" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.671316898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.674345459Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bf7befaa-313b-42c3-8b2c-8866b93460dc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.675055158Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1e0e615f-5dbf-4e5e-954a-74656e291f75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.675907592Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.676347448Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.676806637Z" level=info msg="Ran pod sandbox 3a541d035871e950f8ff5752ebd73a3a7b52a37040ffebffba23fed7ab51cc7b with infra container: kube-system/kindnet-4555c/POD" id=bf7befaa-313b-42c3-8b2c-8866b93460dc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.676878251Z" level=info msg="Ran pod sandbox 665cd2bd6c5ee4e7103fc7eb94b92fc96af4d011437dd3d05d1246e55bc0c848 with infra container: kube-system/kube-proxy-md8mn/POD" id=1e0e615f-5dbf-4e5e-954a-74656e291f75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.677954828Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0e5bd556-6b77-489d-afcb-3d774eab1384 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.677979595Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0418c405-d8ca-4cea-9a86-77b7bb37a73a name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.678926285Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7a41f3e3-aaf7-41ab-99af-485e0cbdef03 name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.678949639Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1ef63b1b-1853-4326-a714-f6f79b067f0e name=/runtime.v1.ImageService/ImageStatus
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680095649Z" level=info msg="Creating container: kube-system/kindnet-4555c/kindnet-cni" id=29b725ce-6a19-4efa-8c33-f73aaaeccc3c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680198844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680098694Z" level=info msg="Creating container: kube-system/kube-proxy-md8mn/kube-proxy" id=c4a880ee-3e7b-4372-ac62-5ce3950616de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.680451989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.684792009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.685358159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.687457468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.688029336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.713750828Z" level=info msg="Created container ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea: kube-system/kindnet-4555c/kindnet-cni" id=29b725ce-6a19-4efa-8c33-f73aaaeccc3c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.714400073Z" level=info msg="Starting container: ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea" id=1bc80fa6-267f-4d65-8c04-26839de8c199 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.716097473Z" level=info msg="Started container" PID=1044 containerID=ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea description=kube-system/kindnet-4555c/kindnet-cni id=1bc80fa6-267f-4d65-8c04-26839de8c199 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a541d035871e950f8ff5752ebd73a3a7b52a37040ffebffba23fed7ab51cc7b
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.716615084Z" level=info msg="Created container 9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe: kube-system/kube-proxy-md8mn/kube-proxy" id=c4a880ee-3e7b-4372-ac62-5ce3950616de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.717293917Z" level=info msg="Starting container: 9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe" id=4600b622-7197-48c3-9d67-f879d70e1a03 name=/runtime.v1.RuntimeService/StartContainer
	Oct 29 09:11:46 newest-cni-259430 crio[518]: time="2025-10-29T09:11:46.719947186Z" level=info msg="Started container" PID=1045 containerID=9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe description=kube-system/kube-proxy-md8mn/kube-proxy id=4600b622-7197-48c3-9d67-f879d70e1a03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=665cd2bd6c5ee4e7103fc7eb94b92fc96af4d011437dd3d05d1246e55bc0c848
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9379d9fd0f7b6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   665cd2bd6c5ee       kube-proxy-md8mn                            kube-system
	ed1be3dde08f4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   3a541d035871e       kindnet-4555c                               kube-system
	f3e3a0ed6603e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   081e95130ca23       etcd-newest-cni-259430                      kube-system
	4ab2230f580db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   e8f091858a617       kube-controller-manager-newest-cni-259430   kube-system
	74cc9b0ba8d30       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   be993b1602e71       kube-apiserver-newest-cni-259430            kube-system
	d9d755902ee30       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   28d9cf116c054       kube-scheduler-newest-cni-259430            kube-system
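	The table above is the node's CRI runtime state (the usual CLI for it is `crictl ps -a`). An equivalent query can be made directly against CRI-O's gRPC socket with the standard CRI client; a sketch, assuming CRI-O's default socket path rather than whatever tooling the report itself used:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// State is e.g. CONTAINER_RUNNING; the 13-char ID prefix matches
			// the CONTAINER column in the table above.
			fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
	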
	
	
	==> describe nodes <==
	Name:               newest-cni-259430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-259430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=newest-cni-259430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:11:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-259430
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:11:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Oct 2025 09:11:45 +0000   Wed, 29 Oct 2025 09:11:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-259430
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b0b59dc6-8cfb-44ff-8492-2c787c88523a
	  Boot ID:                    785edb17-746e-4427-98c9-d59846dae5bf
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-259430                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-4555c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-259430             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-259430    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-md8mn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-259430             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node newest-cni-259430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node newest-cni-259430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node newest-cni-259430 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node newest-cni-259430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-259430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node newest-cni-259430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node newest-cni-259430 event: Registered Node newest-cni-259430 in Controller
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-259430 event: Registered Node newest-cni-259430 in Controller
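	The describe output explains why coredns and storage-provisioner were still Pending at 09:11:46: the node carries the node.kubernetes.io/not-ready:NoSchedule taint and reports Ready=False until kindnet writes a CNI config into /etc/cni/net.d/. Checking that condition programmatically is short with client-go; a sketch, assuming the default kubeconfig location:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// RecommendedHomeFile is ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-259430", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Mirrors the Ready row of `kubectl describe node` above.
				fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
			}
		}
	}
	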
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: 76 8c 8f f0 6d f3 e6 30 66 5b e9 02 08 00
	[Oct29 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[  +7.860471] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea d1 bc 55 fa d3 08 06
	[  +0.057230] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[  +7.379065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff d2 86 de 12 5c b8 08 06
	[  +0.000481] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ce 53 bf 9c eb 08 06
	[Oct29 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 3e ab f0 10 3c 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 16 ea 00 8c 1d 08 06
	[  +4.650960] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 a3 db 56 0e bd 08 06
	[  +0.000357] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 98 10 a5 48 77 08 06
	[ +10.158654] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	[Oct29 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 62 44 12 a2 6e 08 06
	[  +0.000472] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 a7 64 4b d4 84 08 06
	
	
	==> etcd [f3e3a0ed6603e3856e3d00a3ba9ea0a088ec7378c1ab94c9e4092df6c8e5ce5c] <==
	{"level":"warn","ts":"2025-10-29T09:11:44.761028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.767556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.776133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.782469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.788982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.794979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.802121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.810316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.823868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.833181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.840392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.847041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.853631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.861110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.867948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.875079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.882627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.890510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.897034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.905266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.911585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.924022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.937392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.943485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:11:44.989196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:52 up 54 min,  0 user,  load average: 3.52, 3.98, 2.66
	Linux newest-cni-259430 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed1be3dde08f42b430597b1029ed66daa0d8f54f49214564bd4e8923ad921eea] <==
	I1029 09:11:46.890452       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1029 09:11:46.984613       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1029 09:11:46.984761       1 main.go:148] setting mtu 1500 for CNI 
	I1029 09:11:46.984777       1 main.go:178] kindnetd IP family: "ipv4"
	I1029 09:11:46.984816       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-29T09:11:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1029 09:11:47.186337       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1029 09:11:47.186375       1 controller.go:381] "Waiting for informer caches to sync"
	I1029 09:11:47.186390       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1029 09:11:47.187205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1029 09:11:47.586692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1029 09:11:47.586724       1 metrics.go:72] Registering metrics
	I1029 09:11:47.586815       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [74cc9b0ba8d30a66bc95714a4b556650efaf537c941eca3307d1d9e5161661b1] <==
	I1029 09:11:45.481487       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1029 09:11:45.481737       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:11:45.481763       1 policy_source.go:240] refreshing policies
	I1029 09:11:45.485972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:11:45.495769       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:11:45.497886       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:11:45.498134       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1029 09:11:45.498231       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:11:45.498247       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:11:45.498256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:11:45.498263       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:11:45.501914       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 09:11:45.508895       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:11:45.739281       1 controller.go:667] quota admission added evaluator for: namespaces
	I1029 09:11:45.778737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:11:45.802768       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:11:45.810796       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:11:45.819051       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:11:45.856734       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.188.226"}
	I1029 09:11:45.871449       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.197.141"}
	I1029 09:11:46.384609       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:11:48.817044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1029 09:11:49.166880       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:11:49.266464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:11:49.266506       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4ab2230f580dbca69681a5d9a921b219bd4d7cef2d8ececb23fbd25a060866df] <==
	I1029 09:11:48.813116       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:11:48.813147       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1029 09:11:48.813164       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:11:48.813170       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:11:48.813190       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1029 09:11:48.813202       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:11:48.813214       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:11:48.813301       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:11:48.813451       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:11:48.813470       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:11:48.813470       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:11:48.814903       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:11:48.814921       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:11:48.814930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:11:48.815272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:11:48.818229       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:11:48.818244       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:11:48.818307       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:11:48.820573       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:11:48.823565       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:11:48.825154       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-259430"
	I1029 09:11:48.825250       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1029 09:11:48.826059       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1029 09:11:48.829309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:11:48.842816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9379d9fd0f7b6aca31b8d7be1960ce0c4a30429b454b9e473233044fb3e049fe] <==
	I1029 09:11:46.758022       1 server_linux.go:53] "Using iptables proxy"
	I1029 09:11:46.831600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:11:46.932498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:11:46.932541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1029 09:11:46.932684       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:11:46.953120       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1029 09:11:46.953194       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:11:46.958685       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:11:46.959084       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:11:46.959124       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:11:46.960275       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:11:46.960300       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:11:46.960368       1 config.go:200] "Starting service config controller"
	I1029 09:11:46.960379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:11:46.960415       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:11:46.960433       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:11:46.960590       1 config.go:309] "Starting node config controller"
	I1029 09:11:46.960616       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:11:46.960624       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:11:47.060525       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 09:11:47.060544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:11:47.060554       1 shared_informer.go:356] "Caches are synced" controller="service config"
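	The "Waiting for caches to sync" / "Caches are synced" pairs in the kube-proxy, kindnet, and kube-controller-manager logs are the standard client-go informer startup handshake: start the informers, then block until every watch cache has completed its initial LIST before doing any work. A minimal sketch of the same pattern (assuming a reachable kubeconfig; not the components' actual code):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		nodes := factory.Core().V1().Nodes().Informer()
	
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
	
		factory.Start(ctx.Done()) // logs would say "Waiting for caches to sync"
		if !cache.WaitForCacheSync(ctx.Done(), nodes.HasSynced) {
			panic("caches never synced")
		}
		fmt.Println("caches are synced") // now safe to start the control loops
	}
	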
	
	
	==> kube-scheduler [d9d755902ee309db827c42914c9d11cc00e2c96ca199f264674c43e17f1a58bb] <==
	I1029 09:11:44.096780       1 serving.go:386] Generated self-signed cert in-memory
	W1029 09:11:45.415773       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1029 09:11:45.415825       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1029 09:11:45.415839       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1029 09:11:45.415850       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1029 09:11:45.437045       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:11:45.437081       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:11:45.440232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:11:45.440284       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:11:45.440765       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:11:45.442746       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:11:45.540947       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.503162     672 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.503205     672 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.504165     672 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.514046     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-259430\" already exists" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.517184     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-259430\" already exists" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.517188     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-259430\" already exists" pod="kube-system/kube-apiserver-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.580765     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-259430\" already exists" pod="kube-system/kube-apiserver-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.580809     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.588629     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-259430\" already exists" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.588668     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.595612     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-259430\" already exists" pod="kube-system/kube-scheduler-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: I1029 09:11:45.595649     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:45 newest-cni-259430 kubelet[672]: E1029 09:11:45.602051     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-259430\" already exists" pod="kube-system/etcd-newest-cni-259430"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.328309     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: E1029 09:11:46.335768     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-259430\" already exists" pod="kube-system/kube-controller-manager-newest-cni-259430"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.362758     672 apiserver.go:52] "Watching apiserver"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.464952     672 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.508858     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-xtables-lock\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.508918     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-lib-modules\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.508967     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-cni-cfg\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.509039     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9503ed8-3583-471b-8ed2-cb19fa55932f-xtables-lock\") pod \"kindnet-4555c\" (UID: \"e9503ed8-3583-471b-8ed2-cb19fa55932f\") " pod="kube-system/kindnet-4555c"
	Oct 29 09:11:46 newest-cni-259430 kubelet[672]: I1029 09:11:46.509061     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b216c8f-e72c-44bd-ac4a-4f07213f90bb-lib-modules\") pod \"kube-proxy-md8mn\" (UID: \"5b216c8f-e72c-44bd-ac4a-4f07213f90bb\") " pod="kube-system/kube-proxy-md8mn"
	Oct 29 09:11:47 newest-cni-259430 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 29 09:11:47 newest-cni-259430 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 29 09:11:47 newest-cni-259430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-259430 -n newest-cni-259430
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-259430 -n newest-cni-259430: exit status 2 (340.741539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-259430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp: exit status 1 (62.507038ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-k74f5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-kb7r8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-trgmp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-259430 describe pod coredns-66bc5c9577-k74f5 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-kb7r8 kubernetes-dashboard-855c9754f9-trgmp: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.99s)
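To triage a Pause failure like this locally, the harness's post-mortem can be reproduced by hand. A minimal sketch, assuming the profile from this run (newest-cni-259430; names differ per run) still exists:

	# pause the cluster, then query component status the way helpers_test.go does
	out/minikube-linux-amd64 pause -p newest-cni-259430
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-259430 -n newest-cni-259430
	# list pods not in phase Running, mirroring the field selector used above
	kubectl --context newest-cni-259430 get po -A --field-selector=status.phase!=Running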

                                                
                                    

Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.46
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 4.87
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.83
22 TestOffline 63.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 147.6
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 7.44
48 TestAddons/StoppedEnableDisable 16.8
49 TestCertOptions 30.61
50 TestCertExpiration 218.81
52 TestForceSystemdFlag 32.15
53 TestForceSystemdEnv 25.7
58 TestErrorSpam/setup 19.99
59 TestErrorSpam/start 0.69
60 TestErrorSpam/status 0.98
61 TestErrorSpam/pause 6.2
62 TestErrorSpam/unpause 6.29
63 TestErrorSpam/stop 2.65
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.75
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 8.74
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
75 TestFunctional/serial/CacheCmd/cache/add_local 1.19
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 44.42
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.28
87 TestFunctional/serial/InvalidService 4.62
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 8.76
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 0.99
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 24.04
101 TestFunctional/parallel/SSHCmd 0.8
102 TestFunctional/parallel/CpCmd 2.11
103 TestFunctional/parallel/MySQL 15.4
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 2
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
113 TestFunctional/parallel/License 0.48
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.65
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.2
121 TestFunctional/parallel/ImageCommands/Setup 0.98
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.61
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.28
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
137 TestFunctional/parallel/MountCmd/any-port 8.19
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/MountCmd/specific-port 2.1
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.53
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
148 TestFunctional/parallel/ProfileCmd/profile_list 0.42
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
150 TestFunctional/parallel/ServiceCmd/List 1.74
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.73
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 126.5
163 TestMultiControlPlane/serial/DeployApp 4.09
164 TestMultiControlPlane/serial/PingHostFromPods 1.11
165 TestMultiControlPlane/serial/AddWorkerNode 53.98
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
168 TestMultiControlPlane/serial/CopyFile 18.14
169 TestMultiControlPlane/serial/StopSecondaryNode 19.95
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.26
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.98
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 104.32
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.61
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
176 TestMultiControlPlane/serial/StopCluster 41.84
177 TestMultiControlPlane/serial/RestartCluster 56.46
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 61.28
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
185 TestJSONOutput/start/Command 40.95
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.21
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 29.53
211 TestKicCustomNetwork/use_default_bridge_network 24.49
212 TestKicExistingNetwork 26.82
213 TestKicCustomSubnet 24.97
214 TestKicStaticIP 28.76
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 54.93
219 TestMountStart/serial/StartWithMountFirst 9
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 5.35
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.76
224 TestMountStart/serial/VerifyMountPostDelete 0.29
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.35
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 93.36
231 TestMultiNode/serial/DeployApp2Nodes 3.5
232 TestMultiNode/serial/PingHostFrom2Pods 0.73
233 TestMultiNode/serial/AddNode 56.73
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.69
236 TestMultiNode/serial/CopyFile 10.12
237 TestMultiNode/serial/StopNode 2.31
238 TestMultiNode/serial/StartAfterStop 7.35
239 TestMultiNode/serial/RestartKeepsNodes 73.98
240 TestMultiNode/serial/DeleteNode 5.35
241 TestMultiNode/serial/StopMultiNode 28.66
242 TestMultiNode/serial/RestartMultiNode 50.78
243 TestMultiNode/serial/ValidateNameConflict 24.33
248 TestPreload 109.81
250 TestScheduledStopUnix 97.42
253 TestInsufficientStorage 9.73
254 TestRunningBinaryUpgrade 70.9
256 TestKubernetesUpgrade 304.04
257 TestMissingContainerUpgrade 72.67
258 TestStoppedBinaryUpgrade/Setup 0.58
260 TestPause/serial/Start 90.44
261 TestStoppedBinaryUpgrade/Upgrade 60.62
262 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
265 TestNoKubernetes/serial/StartWithK8s 29.53
273 TestNetworkPlugins/group/false 4.92
277 TestPause/serial/SecondStartNoReconfiguration 7.52
278 TestNoKubernetes/serial/StartWithStopK8s 10.5
280 TestNoKubernetes/serial/Start 6.91
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
282 TestNoKubernetes/serial/ProfileList 2.01
290 TestNoKubernetes/serial/Stop 1.37
291 TestNoKubernetes/serial/StartNoArgs 7.48
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
293 TestNetworkPlugins/group/auto/Start 69.35
294 TestNetworkPlugins/group/kindnet/Start 69.95
295 TestNetworkPlugins/group/auto/KubeletFlags 0.35
296 TestNetworkPlugins/group/auto/NetCatPod 9.21
297 TestNetworkPlugins/group/auto/DNS 0.11
298 TestNetworkPlugins/group/auto/Localhost 0.09
299 TestNetworkPlugins/group/auto/HairPin 0.09
300 TestNetworkPlugins/group/calico/Start 50.83
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
303 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
304 TestNetworkPlugins/group/kindnet/DNS 0.13
305 TestNetworkPlugins/group/kindnet/Localhost 0.11
306 TestNetworkPlugins/group/kindnet/HairPin 0.11
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/custom-flannel/Start 53.27
309 TestNetworkPlugins/group/calico/KubeletFlags 0.33
310 TestNetworkPlugins/group/calico/NetCatPod 10.19
311 TestNetworkPlugins/group/calico/DNS 0.13
312 TestNetworkPlugins/group/calico/Localhost 0.1
313 TestNetworkPlugins/group/calico/HairPin 0.1
314 TestNetworkPlugins/group/enable-default-cni/Start 71.69
315 TestNetworkPlugins/group/flannel/Start 49.02
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
318 TestNetworkPlugins/group/custom-flannel/DNS 0.11
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
321 TestNetworkPlugins/group/bridge/Start 67.72
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
324 TestNetworkPlugins/group/flannel/NetCatPod 9.19
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
327 TestNetworkPlugins/group/flannel/DNS 0.12
328 TestNetworkPlugins/group/flannel/Localhost 0.09
329 TestNetworkPlugins/group/flannel/HairPin 0.1
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
334 TestStartStop/group/old-k8s-version/serial/FirstStart 53.65
336 TestStartStop/group/no-preload/serial/FirstStart 54.98
338 TestStartStop/group/embed-certs/serial/FirstStart 45.75
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
340 TestNetworkPlugins/group/bridge/NetCatPod 9.23
341 TestNetworkPlugins/group/bridge/DNS 0.15
342 TestNetworkPlugins/group/bridge/Localhost 0.11
343 TestNetworkPlugins/group/bridge/HairPin 0.11
344 TestStartStop/group/old-k8s-version/serial/DeployApp 9.26
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40
347 TestStartStop/group/embed-certs/serial/DeployApp 8.27
348 TestStartStop/group/no-preload/serial/DeployApp 9.28
350 TestStartStop/group/old-k8s-version/serial/Stop 16.18
353 TestStartStop/group/embed-certs/serial/Stop 18.08
354 TestStartStop/group/no-preload/serial/Stop 16.38
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
356 TestStartStop/group/old-k8s-version/serial/SecondStart 48.72
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
358 TestStartStop/group/embed-certs/serial/SecondStart 48.69
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
360 TestStartStop/group/no-preload/serial/SecondStart 49.65
361 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.34
363 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.03
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.19
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.07
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
379 TestStartStop/group/newest-cni/serial/FirstStart 27.08
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/newest-cni/serial/DeployApp 0
384 TestStartStop/group/newest-cni/serial/Stop 7.96
385 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
388 TestStartStop/group/newest-cni/serial/SecondStart 10.47
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
TestDownloadOnly/v1.28.0/json-events (4.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-682324 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-682324 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.462370304s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.46s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1029 08:20:02.467496    7218 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1029 08:20:02.467594    7218 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
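The preload-exists subtest only asserts that the preloaded image tarball is already present in the local cache. An equivalent manual check, assuming the MINIKUBE_HOME used in this run:

	ls -lh /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4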

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-682324
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-682324: exit status 85 (78.071258ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-682324 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-682324 │ jenkins │ v1.37.0 │ 29 Oct 25 08:19 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:19:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:19:58.061253    7230 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:19:58.061496    7230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:19:58.061508    7230 out.go:374] Setting ErrFile to fd 2...
	I1029 08:19:58.061514    7230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:19:58.061781    7230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	W1029 08:19:58.061960    7230 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21800-3727/.minikube/config/config.json: open /home/jenkins/minikube-integration/21800-3727/.minikube/config/config.json: no such file or directory
	I1029 08:19:58.062534    7230 out.go:368] Setting JSON to true
	I1029 08:19:58.063470    7230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":146,"bootTime":1761725852,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:19:58.063575    7230 start.go:143] virtualization: kvm guest
	I1029 08:19:58.065549    7230 out.go:99] [download-only-682324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:19:58.065704    7230 notify.go:221] Checking for updates...
	W1029 08:19:58.065752    7230 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball: no such file or directory
	I1029 08:19:58.066846    7230 out.go:171] MINIKUBE_LOCATION=21800
	I1029 08:19:58.067963    7230 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:19:58.069266    7230 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:19:58.070549    7230 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 08:19:58.071817    7230 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1029 08:19:58.074250    7230 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1029 08:19:58.074530    7230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:19:58.099276    7230 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 08:19:58.099365    7230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:19:58.516081    7230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-29 08:19:58.505109209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:19:58.516194    7230 docker.go:319] overlay module found
	I1029 08:19:58.517617    7230 out.go:99] Using the docker driver based on user configuration
	I1029 08:19:58.517644    7230 start.go:309] selected driver: docker
	I1029 08:19:58.517650    7230 start.go:930] validating driver "docker" against <nil>
	I1029 08:19:58.517735    7230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:19:58.582970    7230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-29 08:19:58.570438465 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:19:58.583155    7230 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:19:58.583661    7230 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1029 08:19:58.583814    7230 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 08:19:58.585155    7230 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-682324 host does not exist
	  To start a cluster, run: "minikube start -p download-only-682324"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
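Exit status 85 is the expected outcome here: as the stdout above notes, no control-plane host is ever created for a download-only profile, so the logs command cannot fully succeed and the subtest passes despite the non-zero exit. A minimal sketch of the same check (only valid mid-run, before the Delete subtests below remove the profile):

	# expected to exit non-zero (85 in this run): no host exists for a download-only profile
	out/minikube-linux-amd64 logs -p download-only-682324; echo "exit: $?"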

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-682324
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-360816 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-360816 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.868131983s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.87s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1029 08:20:07.807372    7218 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1029 08:20:07.807419    7218 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-3727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-360816
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-360816: exit status 85 (76.599061ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-682324 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-682324 │ jenkins │ v1.37.0 │ 29 Oct 25 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ delete  │ -p download-only-682324                                                                                                                                                   │ download-only-682324 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │ 29 Oct 25 08:20 UTC │
	│ start   │ -o=json --download-only -p download-only-360816 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-360816 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:02.989596    7583 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:02.989704    7583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:02.989710    7583 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:02.989714    7583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:02.989887    7583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:20:02.990392    7583 out.go:368] Setting JSON to true
	I1029 08:20:02.991209    7583 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":151,"bootTime":1761725852,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:20:02.991290    7583 start.go:143] virtualization: kvm guest
	I1029 08:20:02.993033    7583 out.go:99] [download-only-360816] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:20:02.993177    7583 notify.go:221] Checking for updates...
	I1029 08:20:02.994723    7583 out.go:171] MINIKUBE_LOCATION=21800
	I1029 08:20:02.996328    7583 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:02.997901    7583 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:20:02.999141    7583 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 08:20:03.000580    7583 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1029 08:20:03.003504    7583 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1029 08:20:03.003819    7583 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:03.028758    7583 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 08:20:03.028869    7583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:03.086538    7583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-29 08:20:03.076503218 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:20:03.086645    7583 docker.go:319] overlay module found
	I1029 08:20:03.088161    7583 out.go:99] Using the docker driver based on user configuration
	I1029 08:20:03.088182    7583 start.go:309] selected driver: docker
	I1029 08:20:03.088187    7583 start.go:930] validating driver "docker" against <nil>
	I1029 08:20:03.088258    7583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:20:03.146297    7583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-29 08:20:03.136925181 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:20:03.146505    7583 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:03.147092    7583 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1029 08:20:03.147264    7583 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 08:20:03.149212    7583 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-360816 host does not exist
	  To start a cluster, run: "minikube start -p download-only-360816"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-360816
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-695934 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-695934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-695934
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestBinaryMirror (0.83s)

                                                
                                                
=== RUN   TestBinaryMirror
I1029 08:20:08.960839    7218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-110152 --alsologtostderr --binary-mirror http://127.0.0.1:45439 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-110152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-110152
--- PASS: TestBinaryMirror (0.83s)
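Even when started with --binary-mirror, kubectl is verified against the upstream checksum noted in the log above. A minimal sketch of the same verification done by hand, using the URLs from binary.go:74 (and assuming, as is the case for dl.k8s.io, that the .sha256 file contains just the bare hash):

	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check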

                                                
                                    
TestOffline (63.63s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-448600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-448600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m1.027459505s)
helpers_test.go:175: Cleaning up "offline-crio-448600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-448600
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-448600: (2.604532654s)
--- PASS: TestOffline (63.63s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-306574
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-306574: exit status 85 (61.596051ms)

                                                
                                                
-- stdout --
	* Profile "addons-306574" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306574"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-306574
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-306574: exit status 85 (62.162327ms)

                                                
                                                
-- stdout --
	* Profile "addons-306574" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306574"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (147.6s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-306574 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-306574 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m27.594997144s)
--- PASS: TestAddons/Setup (147.60s)
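All of these components were enabled at start time via repeated --addons flags; the same addons can also be toggled on the running profile afterwards, as later addon tests in this report do. A minimal sketch with one addon from the list above:

	out/minikube-linux-amd64 addons enable registry -p addons-306574
	out/minikube-linux-amd64 addons disable registry -p addons-306574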

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-306574 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-306574 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-306574 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-306574 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1e2d92de-7eba-4d7c-b287-b5b5f0ea39a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1e2d92de-7eba-4d7c-b287-b5b5f0ea39a2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003551283s
addons_test.go:694: (dbg) Run:  kubectl --context addons-306574 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-306574 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-306574 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.44s)
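
Note: the env-var injection checked above can be verified manually; a minimal sketch, assuming the busybox pod from testdata/busybox.yaml is still running:

	kubectl --context addons-306574 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
	# both variables should be set by the gcp-auth admission webhook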

                                                
                                    
TestAddons/StoppedEnableDisable (16.8s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-306574
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-306574: (16.512308747s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-306574
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-306574
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-306574
--- PASS: TestAddons/StoppedEnableDisable (16.80s)

                                                
                                    
TestCertOptions (30.61s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-369560 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-369560 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.14182785s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-369560 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-369560 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-369560 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-369560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-369560
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-369560: (2.71676386s)
--- PASS: TestCertOptions (30.61s)
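
Note: the certificate check above reduces to inspecting the apiserver certificate's SANs; a minimal sketch, grepping for the extra IPs and names passed on the start command line:

	out/minikube-linux-amd64 -p cert-options-369560 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# expected to list 192.168.15.15 and www.google.com among the SANs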

                                                
                                    
TestCertExpiration (218.81s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-230123 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-230123 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.165085954s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-230123 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-230123 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.954512168s)
helpers_test.go:175: Cleaning up "cert-expiration-230123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-230123
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-230123: (2.690193737s)
--- PASS: TestCertExpiration (218.81s)
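
Note: --cert-expiration=8760h is one year. After the second start regenerates the certificates, the new expiry can be read back; a minimal sketch (cert path as in TestCertOptions above):

	out/minikube-linux-amd64 -p cert-expiration-230123 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"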

                                                
                                    
TestForceSystemdFlag (32.15s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-699681 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-699681 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.165295425s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-699681 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-699681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-699681
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-699681: (3.611193998s)
--- PASS: TestForceSystemdFlag (32.15s)
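
Note: the file read above is where --force-systemd should land; a minimal sketch, assuming CRI-O's cgroup_manager key is the setting being asserted:

	out/minikube-linux-amd64 -p force-systemd-flag-699681 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# assumption: prints cgroup_manager = "systemd"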

                                                
                                    
TestForceSystemdEnv (25.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-317579 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-317579 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.066532408s)
helpers_test.go:175: Cleaning up "force-systemd-env-317579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-317579
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-317579: (2.636516487s)
--- PASS: TestForceSystemdEnv (25.70s)
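
Note: this is the environment-variable variant of the same behavior; a minimal sketch using the MINIKUBE_FORCE_SYSTEMD variable that also appears in the start output elsewhere in this report:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-317579 --memory=3072 --driver=docker --container-runtime=crio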

                                                
                                    
TestErrorSpam/setup (19.99s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-305578 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-305578 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-305578 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-305578 --driver=docker  --container-runtime=crio: (19.993670051s)
--- PASS: TestErrorSpam/setup (19.99s)

                                                
                                    
TestErrorSpam/start (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
TestErrorSpam/status (0.98s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 status
--- PASS: TestErrorSpam/status (0.98s)

                                                
                                    
TestErrorSpam/pause (6.2s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause: exit status 80 (2.128486935s)

                                                
                                                
-- stdout --
	* Pausing node nospam-305578 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause: exit status 80 (1.854063368s)

                                                
                                                
-- stdout --
	* Pausing node nospam-305578 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause: exit status 80 (2.215008304s)

                                                
                                                
-- stdout --
	* Pausing node nospam-305578 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.20s)
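
Note: all three GUEST_PAUSE failures above trace back to `sudo runc list -f json` failing with `open /run/runc: no such file or directory`; a minimal diagnostic sketch for checking the node directly:

	out/minikube-linux-amd64 -p nospam-305578 ssh "sudo ls /run/runc"        # does the runc state directory exist?
	out/minikube-linux-amd64 -p nospam-305578 ssh "sudo runc list -f json"   # the exact command minikube runs under the hood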

                                                
                                    
TestErrorSpam/unpause (6.29s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause: exit status 80 (1.929908104s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-305578 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause: exit status 80 (2.101009592s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-305578 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause: exit status 80 (2.26155722s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-305578 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-29T08:26:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.29s)

                                                
                                    
TestErrorSpam/stop (2.65s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 stop: (2.438714094s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305578 --log_dir /tmp/nospam-305578 stop
--- PASS: TestErrorSpam/stop (2.65s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21800-3727/.minikube/files/etc/test/nested/copy/7218/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.75s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-985165 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-985165 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.749030066s)
--- PASS: TestFunctional/serial/StartWithProxy (41.75s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (8.74s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1029 08:27:05.191349    7218 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-985165 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-985165 --alsologtostderr -v=8: (8.740879699s)
functional_test.go:678: soft start took 8.741621715s for "functional-985165" cluster.
I1029 08:27:13.932699    7218 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (8.74s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-985165 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-985165 /tmp/TestFunctionalserialCacheCmdcacheadd_local3305680834/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cache add minikube-local-cache-test:functional-985165
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cache delete minikube-local-cache-test:functional-985165
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-985165
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (301.498056ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
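
Note: the reload round-trip above can be replayed by hand:

	out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-linux-amd64 -p functional-985165 cache reload
	out/minikube-linux-amd64 -p functional-985165 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again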

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 kubectl -- --context functional-985165 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-985165 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.42s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-985165 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1029 08:27:38.048756    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:38.055206    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:38.066671    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:38.088165    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:38.129661    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:38.211141    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:38.372739    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:38.694501    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:39.336549    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:40.618119    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:43.181027    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:48.302615    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:27:58.544373    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-985165 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.422908117s)
functional_test.go:776: restart took 44.423023714s for "functional-985165" cluster.
I1029 08:28:04.842212    7218 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (44.42s)
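
Note: whether the --extra-config value actually reached the apiserver can be checked against the static pod spec; a minimal sketch (the component=kube-apiserver label is an assumption about the kubeadm static-pod labels):

	kubectl --context functional-985165 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins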

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-985165 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 logs: (1.24007944s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 logs --file /tmp/TestFunctionalserialLogsFileCmd1256525171/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 logs --file /tmp/TestFunctionalserialLogsFileCmd1256525171/001/logs.txt: (1.280015523s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
TestFunctional/serial/InvalidService (4.62s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-985165 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-985165
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-985165: exit status 115 (351.544492ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32724 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-985165 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-985165 delete -f testdata/invalidsvc.yaml: (1.089380713s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
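
Note: the negative path above (a service with no running pod behind it) can be reproduced directly:

	kubectl --context functional-985165 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-985165   # expected: exit 115, SVC_UNREACHABLE
	kubectl --context functional-985165 delete -f testdata/invalidsvc.yaml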

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 config get cpus: exit status 14 (71.839355ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 config get cpus: exit status 14 (63.522937ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
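
Note: the set/get/unset cycle and its exit codes, replayed by hand:

	out/minikube-linux-amd64 -p functional-985165 config set cpus 2
	out/minikube-linux-amd64 -p functional-985165 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-985165 config unset cpus
	out/minikube-linux-amd64 -p functional-985165 config get cpus     # exit 14: key not found in config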

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-985165 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-985165 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 47134: os: process already finished
E1029 08:28:59.987806    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:30:21.909669    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:32:38.040442    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:33:05.751842    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:37:38.041214    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (8.76s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-985165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-985165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.971659ms)

                                                
                                                
-- stdout --
	* [functional-985165] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:28:35.157910   46730 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:28:35.158090   46730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:35.158098   46730 out.go:374] Setting ErrFile to fd 2...
	I1029 08:28:35.158104   46730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:35.158755   46730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:28:35.159290   46730 out.go:368] Setting JSON to false
	I1029 08:28:35.160374   46730 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":663,"bootTime":1761725852,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:28:35.160472   46730 start.go:143] virtualization: kvm guest
	I1029 08:28:35.162325   46730 out.go:179] * [functional-985165] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:28:35.163519   46730 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:28:35.163522   46730 notify.go:221] Checking for updates...
	I1029 08:28:35.165439   46730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:28:35.166579   46730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:28:35.167644   46730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 08:28:35.168585   46730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:28:35.169645   46730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:28:35.171186   46730 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:28:35.171858   46730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:28:35.196924   46730 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 08:28:35.197033   46730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:28:35.255227   46730 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-29 08:28:35.244561959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:28:35.255328   46730 docker.go:319] overlay module found
	I1029 08:28:35.256829   46730 out.go:179] * Using the docker driver based on existing profile
	I1029 08:28:35.258136   46730 start.go:309] selected driver: docker
	I1029 08:28:35.258149   46730 start.go:930] validating driver "docker" against &{Name:functional-985165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-985165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:28:35.258234   46730 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:28:35.259765   46730 out.go:203] 
	W1029 08:28:35.260781   46730 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1029 08:28:35.261928   46730 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-985165 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
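
Note: the memory validation that produced exit 23 above runs before any state is mutated, so it is safe to probe; a minimal sketch:

	out/minikube-linux-amd64 start -p functional-985165 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # expected: 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is less than the usable minimum of 1800MB)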

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-985165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-985165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.485584ms)

                                                
                                                
-- stdout --
	* [functional-985165] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1029 08:28:33.987062   46281 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:28:33.987739   46281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:33.987754   46281 out.go:374] Setting ErrFile to fd 2...
	I1029 08:28:33.987761   46281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:28:33.988402   46281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:28:33.989380   46281 out.go:368] Setting JSON to false
	I1029 08:28:33.990544   46281 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":662,"bootTime":1761725852,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:28:33.990668   46281 start.go:143] virtualization: kvm guest
	I1029 08:28:33.992629   46281 out.go:179] * [functional-985165] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1029 08:28:33.994273   46281 notify.go:221] Checking for updates...
	I1029 08:28:33.994282   46281 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:28:33.995590   46281 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:28:33.996831   46281 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 08:28:33.998017   46281 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 08:28:33.999470   46281 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:28:34.000644   46281 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:28:34.002099   46281 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:28:34.002608   46281 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:28:34.031138   46281 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 08:28:34.031264   46281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:28:34.094983   46281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-29 08:28:34.083597133 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:28:34.095150   46281 docker.go:319] overlay module found
	I1029 08:28:34.096819   46281 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1029 08:28:34.098039   46281 start.go:309] selected driver: docker
	I1029 08:28:34.098057   46281 start.go:930] validating driver "docker" against &{Name:functional-985165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-985165 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:28:34.098191   46281 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:28:34.100105   46281 out.go:203] 
	W1029 08:28:34.101277   46281 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1029 08:28:34.102482   46281 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
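This test re-runs the same failing dry-run and asserts the error is localized; the French capture above is the expected output, not a defect. In English it reads: "* Using the docker driver based on the existing profile" and "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". A sketch for forcing the localized run, assuming minikube picks its message catalog from the standard locale environment variables:

  # Request French output via the locale; the failure text should match the
  # capture above.
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-985165 \
    --dry-run --memory 250MB --driver=docker --container-runtime=crio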

TestFunctional/parallel/StatusCmd (0.99s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)
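The -f flag accepts a Go template rendered against the status struct, so output can be shaped for scripts (the "kublet" label above is verbatim from the test's format string). A usage sketch with the fields the test exercises:

  # Template over the status fields; -o json emits the same data machine-readably.
  out/minikube-linux-amd64 -p functional-985165 status \
    -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-amd64 -p functional-985165 status -o json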

TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)
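The JSON form is the scriptable variant of the same listing. A small sketch, assuming jq is available and that the JSON output is keyed by addon name:

  # Print just the addon names from the JSON listing.
  out/minikube-linux-amd64 -p functional-985165 addons list -o json | jq -r 'keys[]'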

TestFunctional/parallel/PersistentVolumeClaim (24.04s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [3df575c2-deb9-4a38-b407-80473ca84f78] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004702309s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-985165 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-985165 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-985165 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-985165 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fd382667-ed25-496d-b96e-2ecb517ad851] Pending
helpers_test.go:352: "sp-pod" [fd382667-ed25-496d-b96e-2ecb517ad851] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [fd382667-ed25-496d-b96e-2ecb517ad851] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004396853s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-985165 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-985165 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-985165 delete -f testdata/storage-provisioner/pod.yaml: (1.122213598s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-985165 apply -f testdata/storage-provisioner/pod.yaml
I1029 08:28:32.247934    7218 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5388b54b-026c-4f1d-bb7c-9e401d55e894] Pending
helpers_test.go:352: "sp-pod" [5388b54b-026c-4f1d-bb7c-9e401d55e894] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5388b54b-026c-4f1d-bb7c-9e401d55e894] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004481636s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-985165 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.04s)
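The sequence above is a persistence proof: a file written through the claim survives deletion and recreation of the pod, because the PVC and its bound volume outlive the pod. The same flow by hand, using the manifests from the minikube repo's testdata:

  kubectl --context functional-985165 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-985165 apply -f testdata/storage-provisioner/pod.yaml
  # (wait for sp-pod to be Running before each exec)
  kubectl --context functional-985165 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-985165 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-985165 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-985165 exec sp-pod -- ls /tmp/mount   # foo is still there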

TestFunctional/parallel/SSHCmd (0.8s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

TestFunctional/parallel/CpCmd (2.11s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh -n functional-985165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cp functional-985165:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3360487174/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh -n functional-985165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh -n functional-985165 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.11s)

TestFunctional/parallel/MySQL (15.4s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-985165 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-jqcfr" [2b483d81-0f0f-4bc1-a083-e9bcc13f0b6c] Pending
helpers_test.go:352: "mysql-5bb876957f-jqcfr" [2b483d81-0f0f-4bc1-a083-e9bcc13f0b6c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-jqcfr" [2b483d81-0f0f-4bc1-a083-e9bcc13f0b6c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.004129957s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-985165 exec mysql-5bb876957f-jqcfr -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-985165 exec mysql-5bb876957f-jqcfr -- mysql -ppassword -e "show databases;": exit status 1 (91.080158ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1029 08:28:26.987609    7218 retry.go:31] will retry after 1.00596489s: exit status 1
I1029 08:28:27.994158    7218 kapi.go:150] Service nginx-svc in namespace default found.
functional_test.go:1812: (dbg) Run:  kubectl --context functional-985165 exec mysql-5bb876957f-jqcfr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (15.40s)
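The first exec fails with ERROR 2002 because the pod reports Running before mysqld has finished bringing up its socket; the harness retries (retry.go line above) until the server accepts connections. An equivalent readiness loop, assuming the pod name from this run:

  # Poll until mysqld accepts connections instead of trusting pod phase alone.
  until kubectl --context functional-985165 exec mysql-5bb876957f-jqcfr -- \
      mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
    sleep 1
  done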

TestFunctional/parallel/FileSync (0.31s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7218/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo cat /etc/test/nested/copy/7218/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (2s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7218.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo cat /etc/ssl/certs/7218.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7218.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo cat /usr/share/ca-certificates/7218.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/72182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo cat /etc/ssl/certs/72182.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/72182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo cat /usr/share/ca-certificates/72182.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.00s)
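The numeric filenames (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: c_rehash-style entries that let TLS libraries look certificates up by subject inside /etc/ssl/certs. The hash for a given cert can be derived directly (cert path from this run; which hash maps to which file is illustrative):

  # Prints the 8-hex-digit subject hash used as <hash>.0 in /etc/ssl/certs.
  openssl x509 -noout -subject_hash -in /etc/ssl/certs/7218.pem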

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-985165 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh "sudo systemctl is-active docker": exit status 1 (319.242073ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh "sudo systemctl is-active containerd": exit status 1 (317.018276ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
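Both probes behave as intended on a crio cluster: systemctl is-active prints "inactive" and exits 3 for a stopped unit, minikube ssh propagates that ("Process exited with status 3"), and the test only requires a non-zero exit. Checked directly:

  out/minikube-linux-amd64 -p functional-985165 ssh "sudo systemctl is-active docker"
  echo $?   # non-zero, since only "active" yields exit status 0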

TestFunctional/parallel/License (0.48s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-985165 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-985165 image ls --format short --alsologtostderr:
I1029 08:28:40.649797   47704 out.go:360] Setting OutFile to fd 1 ...
I1029 08:28:40.650117   47704 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:40.650130   47704 out.go:374] Setting ErrFile to fd 2...
I1029 08:28:40.650136   47704 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:40.650448   47704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
I1029 08:28:40.651334   47704 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:40.651489   47704 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:40.652055   47704 cli_runner.go:164] Run: docker container inspect functional-985165 --format={{.State.Status}}
I1029 08:28:40.675319   47704 ssh_runner.go:195] Run: systemctl --version
I1029 08:28:40.675380   47704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-985165
I1029 08:28:40.698881   47704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/functional-985165/id_rsa Username:docker}
I1029 08:28:40.807402   47704 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.65s)
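All four image ls formats in this group are renderings of the same node-side query visible in the stderr trace: minikube shells into the node and reads the CRI image store, then formats the result client-side. The underlying call can be run directly:

  # Raw data behind image ls; minikube renders this as short/table/json/yaml.
  out/minikube-linux-amd64 -p functional-985165 ssh "sudo crictl images --output json"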

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls --format table --alsologtostderr
2025/10/29 08:28:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-985165 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-985165  │ 1c12165e1db75 │ 1.47MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-985165 image ls --format table --alsologtostderr:
I1029 08:28:43.942298   48424 out.go:360] Setting OutFile to fd 1 ...
I1029 08:28:43.942558   48424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:43.942569   48424 out.go:374] Setting ErrFile to fd 2...
I1029 08:28:43.942573   48424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:43.942741   48424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
I1029 08:28:43.943430   48424 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:43.943585   48424 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:43.944103   48424 cli_runner.go:164] Run: docker container inspect functional-985165 --format={{.State.Status}}
I1029 08:28:43.964605   48424 ssh_runner.go:195] Run: systemctl --version
I1029 08:28:43.964651   48424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-985165
I1029 08:28:43.982026   48424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/functional-985165/id_rsa Username:docker}
I1029 08:28:44.085760   48424 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-985165 image ls --format json --alsologtostderr:
[{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c
82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"4992b8b025346750ef960663a3ac6ae800ba2f6dc6e3727d57a7ceaffeb0692d","repoDigests":["docker.io/library/0ac347a9b222f4dee97dd15cccaa8c4328cbc582257f72a1fa0e68783f324450-tmp@sha256:2a83a07a23a702c3cd71947ede4a8eff2660780f0fdd8d62f8355434d5584518"],"repoTags":[],"size":"1466131"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:9dacca6749f2215cc3094f641c5b6662f7791e66a57ed034e806a7c48d51c18f"]
,"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha2
56:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:b619c34a163ac12f68c1982568a122c4953dbf3126b8dbf0cc2f6fdbfd85de27"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb
813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"1c12165e1db75c2478f27ba06888f550667ed11ab90d37aaee04622cf7693f31","repoDigests":["localhost/my-image@sha256:26f76ed82df0fba013f2aa12f3807c6f150761edec72d380afc18dcf4a56ff18"],"repoTags":["localhost/my-image:functional-985165"],"size":"1468743"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c01
45cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-985165 image ls --format json --alsologtostderr:
I1029 08:28:43.712428   48367 out.go:360] Setting OutFile to fd 1 ...
I1029 08:28:43.712678   48367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:43.712688   48367 out.go:374] Setting ErrFile to fd 2...
I1029 08:28:43.712692   48367 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:43.712893   48367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
I1029 08:28:43.713476   48367 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:43.713568   48367 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:43.713906   48367 cli_runner.go:164] Run: docker container inspect functional-985165 --format={{.State.Status}}
I1029 08:28:43.732390   48367 ssh_runner.go:195] Run: systemctl --version
I1029 08:28:43.732453   48367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-985165
I1029 08:28:43.750799   48367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/functional-985165/id_rsa Username:docker}
I1029 08:28:43.851139   48367 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-985165 image ls --format yaml --alsologtostderr:
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1c12165e1db75c2478f27ba06888f550667ed11ab90d37aaee04622cf7693f31
repoDigests:
- localhost/my-image@sha256:26f76ed82df0fba013f2aa12f3807c6f150761edec72d380afc18dcf4a56ff18
repoTags:
- localhost/my-image:functional-985165
size: "1468743"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 4992b8b025346750ef960663a3ac6ae800ba2f6dc6e3727d57a7ceaffeb0692d
repoDigests:
- docker.io/library/0ac347a9b222f4dee97dd15cccaa8c4328cbc582257f72a1fa0e68783f324450-tmp@sha256:2a83a07a23a702c3cd71947ede4a8eff2660780f0fdd8d62f8355434d5584518
repoTags: []
size: "1466131"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:9dacca6749f2215cc3094f641c5b6662f7791e66a57ed034e806a7c48d51c18f
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec
repoDigests:
- docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58
- docker.io/library/nginx@sha256:b619c34a163ac12f68c1982568a122c4953dbf3126b8dbf0cc2f6fdbfd85de27
repoTags:
- docker.io/library/nginx:latest
size: "155489797"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-985165 image ls --format yaml --alsologtostderr:
I1029 08:28:43.483553   48312 out.go:360] Setting OutFile to fd 1 ...
I1029 08:28:43.483809   48312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:43.483819   48312 out.go:374] Setting ErrFile to fd 2...
I1029 08:28:43.483824   48312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:43.484070   48312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
I1029 08:28:43.484626   48312 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:43.484711   48312 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:43.485083   48312 cli_runner.go:164] Run: docker container inspect functional-985165 --format={{.State.Status}}
I1029 08:28:43.503628   48312 ssh_runner.go:195] Run: systemctl --version
I1029 08:28:43.503676   48312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-985165
I1029 08:28:43.521489   48312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/functional-985165/id_rsa Username:docker}
I1029 08:28:43.620934   48312 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh pgrep buildkitd: exit status 1 (287.824692ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image build -t localhost/my-image:functional-985165 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 image build -t localhost/my-image:functional-985165 testdata/build --alsologtostderr: (1.678334535s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-985165 image build -t localhost/my-image:functional-985165 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4992b8b0253
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-985165
--> 1c12165e1db
Successfully tagged localhost/my-image:functional-985165
1c12165e1db75c2478f27ba06888f550667ed11ab90d37aaee04622cf7693f31
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-985165 image build -t localhost/my-image:functional-985165 testdata/build --alsologtostderr:
I1029 08:28:41.570349   47896 out.go:360] Setting OutFile to fd 1 ...
I1029 08:28:41.570658   47896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:41.570667   47896 out.go:374] Setting ErrFile to fd 2...
I1029 08:28:41.570671   47896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:28:41.570865   47896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
I1029 08:28:41.571489   47896 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:41.572175   47896 config.go:182] Loaded profile config "functional-985165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:28:41.572639   47896 cli_runner.go:164] Run: docker container inspect functional-985165 --format={{.State.Status}}
I1029 08:28:41.590919   47896 ssh_runner.go:195] Run: systemctl --version
I1029 08:28:41.590973   47896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-985165
I1029 08:28:41.609227   47896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/functional-985165/id_rsa Username:docker}
I1029 08:28:41.708711   47896 build_images.go:162] Building image from path: /tmp/build.1251957683.tar
I1029 08:28:41.708787   47896 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1029 08:28:41.717382   47896 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1251957683.tar
I1029 08:28:41.721382   47896 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1251957683.tar: stat -c "%s %y" /var/lib/minikube/build/build.1251957683.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1251957683.tar': No such file or directory
I1029 08:28:41.721411   47896 ssh_runner.go:362] scp /tmp/build.1251957683.tar --> /var/lib/minikube/build/build.1251957683.tar (3072 bytes)
I1029 08:28:41.740608   47896 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1251957683
I1029 08:28:41.748921   47896 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1251957683 -xf /var/lib/minikube/build/build.1251957683.tar
I1029 08:28:41.757605   47896 crio.go:315] Building image: /var/lib/minikube/build/build.1251957683
I1029 08:28:41.757692   47896 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-985165 /var/lib/minikube/build/build.1251957683 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1029 08:28:43.168777   47896 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-985165 /var/lib/minikube/build/build.1251957683 --cgroup-manager=cgroupfs: (1.411054714s)
I1029 08:28:43.168848   47896 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1251957683
I1029 08:28:43.177643   47896 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1251957683.tar
I1029 08:28:43.185833   47896 build_images.go:218] Built localhost/my-image:functional-985165 from /tmp/build.1251957683.tar
I1029 08:28:43.185863   47896 build_images.go:134] succeeded building to: functional-985165
I1029 08:28:43.185867   47896 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.20s)
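
For context on the block above: with crio as the runtime there is no buildkitd, so the `ssh pgrep buildkitd` probe exits non-zero and minikube falls back to building with podman inside the node, which is why the output shows podman's STEP 1/3..3/3 lines. A minimal sketch of that flow (not the harness source; the profile name and flags are copied from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        minikube := "out/minikube-linux-amd64"
        profile := "functional-985165"
        // Probe for buildkitd; on crio this exits non-zero, as in the log.
        if err := exec.Command(minikube, "-p", profile, "ssh", "pgrep buildkitd").Run(); err != nil {
            fmt.Println("no buildkitd, expecting a podman fallback:", err)
        }
        // Build testdata/build; per the log, minikube tars the context, copies it
        // to /var/lib/minikube/build on the node, and runs `sudo podman build`.
        out, err := exec.Command(minikube, "-p", profile, "image", "build",
            "-t", "localhost/my-image:"+profile, "testdata/build",
            "--alsologtostderr").CombinedOutput()
        if err != nil {
            fmt.Println("build failed:", err)
        }
        fmt.Print(string(out))
    }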

TestFunctional/parallel/ImageCommands/Setup (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-985165
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-985165 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-985165 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-985165 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-985165 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 41850: os: process already finished
helpers_test.go:525: unable to kill pid 41584: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-985165 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.28s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-985165 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4020cd41-a528-476f-938c-82794163d4e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4020cd41-a528-476f-938c-82794163d4e9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003624139s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.28s)
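
A minimal sketch of the setup step above, assuming kubectl is on PATH: apply testdata/testsvc.yaml, then wait for the run=nginx-svc pod to become Ready. The harness polls for up to 4m0s itself; `kubectl wait` below is an equivalent, not what the test calls.

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        ctx := "functional-985165" // kubectl context used in this run
        // Create the nginx-svc pod and its LoadBalancer service.
        apply := exec.Command("kubectl", "--context", ctx, "apply", "-f", "testdata/testsvc.yaml")
        apply.Stdout, apply.Stderr = os.Stdout, os.Stderr
        _ = apply.Run()
        // Stand-in for the harness's 4m readiness poll.
        wait := exec.Command("kubectl", "--context", ctx, "wait",
            "--for=condition=Ready", "pod", "-l", "run=nginx-svc", "--timeout=4m0s")
        wait.Stdout, wait.Stderr = os.Stdout, os.Stderr
        _ = wait.Run()
    }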

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image rm kicbase/echo-server:functional-985165 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/MountCmd/any-port (8.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdany-port3656034961/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761726500846629071" to /tmp/TestFunctionalparallelMountCmdany-port3656034961/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761726500846629071" to /tmp/TestFunctionalparallelMountCmdany-port3656034961/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761726500846629071" to /tmp/TestFunctionalparallelMountCmdany-port3656034961/001/test-1761726500846629071
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T /mount-9p | grep 9p"
I1029 08:28:20.897607    7218 detect.go:223] nested VM detected
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.783265ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1029 08:28:21.197778    7218 retry.go:31] will retry after 676.385715ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 29 08:28 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 29 08:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 29 08:28 test-1761726500846629071
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh cat /mount-9p/test-1761726500846629071
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-985165 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [71b05693-6e0a-457a-949e-8a05f8466b23] Pending
helpers_test.go:352: "busybox-mount" [71b05693-6e0a-457a-949e-8a05f8466b23] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [71b05693-6e0a-457a-949e-8a05f8466b23] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [71b05693-6e0a-457a-949e-8a05f8466b23] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003183054s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-985165 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdany-port3656034961/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.19s)
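
Note the retry above: the first findmnt probe can race the 9p mount coming up, hence the `retry.go:31` backoff in the log. A minimal sketch of that probe loop (command strings copied from the log; the loop bound and sleep are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for attempt := 0; attempt < 5; attempt++ {
            out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-985165",
                "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
            if err == nil {
                fmt.Print(string(out)) // the 9p mount is visible in the guest
                return
            }
            time.Sleep(700 * time.Millisecond) // comparable to the logged 676ms backoff
        }
        fmt.Println("mount never appeared")
    }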

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-985165 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
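
A minimal sketch of the check above, assuming kubectl is on PATH: while `minikube tunnel` runs, poll the service until the tunnel populates `.status.loadBalancer.ingress[0].ip` (10.107.202.6 in this run). The poll bound is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        for i := 0; i < 30; i++ {
            out, _ := exec.Command("kubectl", "--context", "functional-985165",
                "get", "svc", "nginx-svc", "-o",
                "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
            if ip := strings.TrimSpace(string(out)); ip != "" {
                fmt.Println("tunnel LoadBalancer IP:", ip)
                return
            }
            time.Sleep(2 * time.Second) // wait for the tunnel to assign an IP
        }
    }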

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.202.6 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-985165 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/specific-port (2.1s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdspecific-port3267081488/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.731869ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1029 08:28:29.343466    7218 retry.go:31] will retry after 674.863502ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdspecific-port3267081488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh "sudo umount -f /mount-9p": exit status 1 (290.874273ms)
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-985165 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdspecific-port3267081488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T" /mount1: exit status 1 (358.692537ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1029 08:28:31.498136    7218 retry.go:31] will retry after 251.634021ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-985165 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-985165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1945535101/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "358.575629ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.178625ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "346.160657ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.880912ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ServiceCmd/List (1.74s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 service list: (1.741022843s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.74s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-985165 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-985165 service list -o json: (1.731801245s)
functional_test.go:1504: Took "1.731907521s" to run "out/minikube-linux-amd64 -p functional-985165 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-985165
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-985165
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-985165
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (126.5s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m5.746111905s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (126.50s)

TestMultiControlPlane/serial/DeployApp (4.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 kubectl -- rollout status deployment/busybox: (1.988980379s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-bs5fw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-sfbj4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-z4zgm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-bs5fw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-sfbj4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-z4zgm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-bs5fw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-sfbj4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-z4zgm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.09s)

TestMultiControlPlane/serial/PingHostFromPods (1.11s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-bs5fw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-bs5fw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-sfbj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-sfbj4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-z4zgm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 kubectl -- exec busybox-7b57f96db7-z4zgm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)
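
The shell pipeline above is worth unpacking: `nslookup host.minikube.internal` runs inside the pod, `awk 'NR==5'` keeps the answer line of busybox nslookup's output, and `cut -d' ' -f3` extracts the IP field (192.168.49.1, the docker network gateway), which the pod then pings. A minimal sketch of one iteration, with the pod name taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pod := "busybox-7b57f96db7-bs5fw"
        resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
        // Resolve the host address from inside the pod.
        ip, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-043366",
            "kubectl", "--", "exec", pod, "--", "sh", "-c", resolve).Output()
        if err != nil {
            fmt.Println("resolve failed:", err)
            return
        }
        fmt.Printf("host.minikube.internal -> %s", ip)
        // One ping from the pod back to the host-side gateway.
        _ = exec.Command("out/minikube-linux-amd64", "-p", "ha-043366",
            "kubectl", "--", "exec", pod, "--", "sh", "-c", "ping -c 1 192.168.49.1").Run()
    }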

TestMultiControlPlane/serial/AddWorkerNode (53.98s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 node add --alsologtostderr -v 5: (53.047095525s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.98s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-043366 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (18.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp testdata/cp-test.txt ha-043366:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2456543405/001/cp-test_ha-043366.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366:/home/docker/cp-test.txt ha-043366-m02:/home/docker/cp-test_ha-043366_ha-043366-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test_ha-043366_ha-043366-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366:/home/docker/cp-test.txt ha-043366-m03:/home/docker/cp-test_ha-043366_ha-043366-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test_ha-043366_ha-043366-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366:/home/docker/cp-test.txt ha-043366-m04:/home/docker/cp-test_ha-043366_ha-043366-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test_ha-043366_ha-043366-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp testdata/cp-test.txt ha-043366-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2456543405/001/cp-test_ha-043366-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m02:/home/docker/cp-test.txt ha-043366:/home/docker/cp-test_ha-043366-m02_ha-043366.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test_ha-043366-m02_ha-043366.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m02:/home/docker/cp-test.txt ha-043366-m03:/home/docker/cp-test_ha-043366-m02_ha-043366-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test_ha-043366-m02_ha-043366-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m02:/home/docker/cp-test.txt ha-043366-m04:/home/docker/cp-test_ha-043366-m02_ha-043366-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test_ha-043366-m02_ha-043366-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp testdata/cp-test.txt ha-043366-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2456543405/001/cp-test_ha-043366-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m03:/home/docker/cp-test.txt ha-043366:/home/docker/cp-test_ha-043366-m03_ha-043366.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test_ha-043366-m03_ha-043366.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m03:/home/docker/cp-test.txt ha-043366-m02:/home/docker/cp-test_ha-043366-m03_ha-043366-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test_ha-043366-m03_ha-043366-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m03:/home/docker/cp-test.txt ha-043366-m04:/home/docker/cp-test_ha-043366-m03_ha-043366-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test_ha-043366-m03_ha-043366-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp testdata/cp-test.txt ha-043366-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2456543405/001/cp-test_ha-043366-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m04:/home/docker/cp-test.txt ha-043366:/home/docker/cp-test_ha-043366-m04_ha-043366.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366 "sudo cat /home/docker/cp-test_ha-043366-m04_ha-043366.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m04:/home/docker/cp-test.txt ha-043366-m02:/home/docker/cp-test_ha-043366-m04_ha-043366-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m02 "sudo cat /home/docker/cp-test_ha-043366-m04_ha-043366-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 cp ha-043366-m04:/home/docker/cp-test.txt ha-043366-m03:/home/docker/cp-test_ha-043366-m04_ha-043366-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 ssh -n ha-043366-m03 "sudo cat /home/docker/cp-test_ha-043366-m04_ha-043366-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.14s)
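
A minimal sketch of the copy-and-verify pattern repeated above: `minikube cp` pushes a file from the host to a node (or between nodes), and `minikube ssh -n <node>` cats it back to confirm the contents. Profile and node names are from this run.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes the job's minikube binary and returns its combined output.
    func run(args ...string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return string(out)
    }

    func main() {
        // Host -> node copy, then read it back over SSH.
        run("-p", "ha-043366", "cp", "testdata/cp-test.txt",
            "ha-043366-m02:/home/docker/cp-test.txt")
        fmt.Print(run("-p", "ha-043366", "ssh", "-n", "ha-043366-m02",
            "sudo cat /home/docker/cp-test.txt"))
        // Node -> node copies go through the same cp subcommand.
        run("-p", "ha-043366", "cp", "ha-043366-m02:/home/docker/cp-test.txt",
            "ha-043366-m03:/home/docker/cp-test_ha-043366-m02_ha-043366-m03.txt")
    }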

TestMultiControlPlane/serial/StopSecondaryNode (19.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 node stop m02 --alsologtostderr -v 5: (19.193701802s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5: exit status 7 (758.47904ms)
-- stdout --
	ha-043366
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-043366-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-043366-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-043366-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1029 08:42:23.698643   72360 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:42:23.698938   72360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:42:23.698949   72360 out.go:374] Setting ErrFile to fd 2...
	I1029 08:42:23.698953   72360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:42:23.699165   72360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:42:23.699347   72360 out.go:368] Setting JSON to false
	I1029 08:42:23.699372   72360 mustload.go:66] Loading cluster: ha-043366
	I1029 08:42:23.699519   72360 notify.go:221] Checking for updates...
	I1029 08:42:23.699765   72360 config.go:182] Loaded profile config "ha-043366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:42:23.699779   72360 status.go:174] checking status of ha-043366 ...
	I1029 08:42:23.700203   72360 cli_runner.go:164] Run: docker container inspect ha-043366 --format={{.State.Status}}
	I1029 08:42:23.722885   72360 status.go:371] ha-043366 host status = "Running" (err=<nil>)
	I1029 08:42:23.722918   72360 host.go:66] Checking if "ha-043366" exists ...
	I1029 08:42:23.723232   72360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-043366
	I1029 08:42:23.742931   72360 host.go:66] Checking if "ha-043366" exists ...
	I1029 08:42:23.743295   72360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:42:23.743347   72360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-043366
	I1029 08:42:23.763400   72360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/ha-043366/id_rsa Username:docker}
	I1029 08:42:23.865373   72360 ssh_runner.go:195] Run: systemctl --version
	I1029 08:42:23.872259   72360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:42:23.886911   72360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:42:23.951848   72360 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-29 08:42:23.940246987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:42:23.952633   72360 kubeconfig.go:125] found "ha-043366" server: "https://192.168.49.254:8443"
	I1029 08:42:23.952671   72360 api_server.go:166] Checking apiserver status ...
	I1029 08:42:23.952713   72360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:42:23.965770   72360 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	W1029 08:42:23.975082   72360 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:42:23.975141   72360 ssh_runner.go:195] Run: ls
	I1029 08:42:23.979538   72360 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1029 08:42:23.985495   72360 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1029 08:42:23.985529   72360 status.go:463] ha-043366 apiserver status = Running (err=<nil>)
	I1029 08:42:23.985542   72360 status.go:176] ha-043366 status: &{Name:ha-043366 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:42:23.985560   72360 status.go:174] checking status of ha-043366-m02 ...
	I1029 08:42:23.985898   72360 cli_runner.go:164] Run: docker container inspect ha-043366-m02 --format={{.State.Status}}
	I1029 08:42:24.006716   72360 status.go:371] ha-043366-m02 host status = "Stopped" (err=<nil>)
	I1029 08:42:24.006766   72360 status.go:384] host is not running, skipping remaining checks
	I1029 08:42:24.006773   72360 status.go:176] ha-043366-m02 status: &{Name:ha-043366-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:42:24.006799   72360 status.go:174] checking status of ha-043366-m03 ...
	I1029 08:42:24.007116   72360 cli_runner.go:164] Run: docker container inspect ha-043366-m03 --format={{.State.Status}}
	I1029 08:42:24.026582   72360 status.go:371] ha-043366-m03 host status = "Running" (err=<nil>)
	I1029 08:42:24.026606   72360 host.go:66] Checking if "ha-043366-m03" exists ...
	I1029 08:42:24.026882   72360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-043366-m03
	I1029 08:42:24.045933   72360 host.go:66] Checking if "ha-043366-m03" exists ...
	I1029 08:42:24.046220   72360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:42:24.046254   72360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-043366-m03
	I1029 08:42:24.065616   72360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/ha-043366-m03/id_rsa Username:docker}
	I1029 08:42:24.166377   72360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:42:24.180618   72360 kubeconfig.go:125] found "ha-043366" server: "https://192.168.49.254:8443"
	I1029 08:42:24.180653   72360 api_server.go:166] Checking apiserver status ...
	I1029 08:42:24.180695   72360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:42:24.193653   72360 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup
	W1029 08:42:24.203140   72360 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:42:24.203224   72360 ssh_runner.go:195] Run: ls
	I1029 08:42:24.207481   72360 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1029 08:42:24.211782   72360 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1029 08:42:24.211809   72360 status.go:463] ha-043366-m03 apiserver status = Running (err=<nil>)
	I1029 08:42:24.211817   72360 status.go:176] ha-043366-m03 status: &{Name:ha-043366-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:42:24.211835   72360 status.go:174] checking status of ha-043366-m04 ...
	I1029 08:42:24.212137   72360 cli_runner.go:164] Run: docker container inspect ha-043366-m04 --format={{.State.Status}}
	I1029 08:42:24.231965   72360 status.go:371] ha-043366-m04 host status = "Running" (err=<nil>)
	I1029 08:42:24.232003   72360 host.go:66] Checking if "ha-043366-m04" exists ...
	I1029 08:42:24.232278   72360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-043366-m04
	I1029 08:42:24.253451   72360 host.go:66] Checking if "ha-043366-m04" exists ...
	I1029 08:42:24.253721   72360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:42:24.253761   72360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-043366-m04
	I1029 08:42:24.273510   72360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/ha-043366-m04/id_rsa Username:docker}
	I1029 08:42:24.373595   72360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:42:24.387076   72360 status.go:176] ha-043366-m04 status: &{Name:ha-043366-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.95s)
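
One detail from the block above: with m02 stopped, `minikube status` reports the degraded cluster through a non-zero exit code (7 in this run) rather than failing outright, so callers have to inspect the code. A minimal sketch of that check:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("out/minikube-linux-amd64", "-p", "ha-043366",
            "status", "--alsologtostderr", "-v", "5").Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Was 7 in the run above, where node m02 was stopped.
            fmt.Println("status exit code:", exitErr.ExitCode())
        }
    }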

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.26s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 node start m02 --alsologtostderr -v 5: (8.265594095s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.32s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 stop --alsologtostderr -v 5
E1029 08:42:38.040549    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:12.894279    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:12.900731    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:12.912228    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:12.933714    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:12.975188    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:13.056746    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:13.218416    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:13.540237    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:14.182451    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:15.464163    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:18.027129    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 stop --alsologtostderr -v 5: (44.953690354s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 start --wait true --alsologtostderr -v 5
E1029 08:43:23.149486    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:33.391738    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:53.873115    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:44:01.113951    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 start --wait true --alsologtostderr -v 5: (59.231839304s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.32s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 node delete m03 --alsologtostderr -v 5: (9.770710526s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (41.84s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 stop --alsologtostderr -v 5
E1029 08:44:34.835110    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 stop --alsologtostderr -v 5: (41.722207084s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5: exit status 7 (119.675675ms)

-- stdout --
	ha-043366
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-043366-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-043366-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1029 08:45:12.811567   86335 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:45:12.811852   86335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:45:12.811864   86335 out.go:374] Setting ErrFile to fd 2...
	I1029 08:45:12.811868   86335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:45:12.812165   86335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:45:12.812372   86335 out.go:368] Setting JSON to false
	I1029 08:45:12.812401   86335 mustload.go:66] Loading cluster: ha-043366
	I1029 08:45:12.812596   86335 notify.go:221] Checking for updates...
	I1029 08:45:12.812907   86335 config.go:182] Loaded profile config "ha-043366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:45:12.812924   86335 status.go:174] checking status of ha-043366 ...
	I1029 08:45:12.813482   86335 cli_runner.go:164] Run: docker container inspect ha-043366 --format={{.State.Status}}
	I1029 08:45:12.832659   86335 status.go:371] ha-043366 host status = "Stopped" (err=<nil>)
	I1029 08:45:12.832683   86335 status.go:384] host is not running, skipping remaining checks
	I1029 08:45:12.832693   86335 status.go:176] ha-043366 status: &{Name:ha-043366 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:45:12.832721   86335 status.go:174] checking status of ha-043366-m02 ...
	I1029 08:45:12.832982   86335 cli_runner.go:164] Run: docker container inspect ha-043366-m02 --format={{.State.Status}}
	I1029 08:45:12.851677   86335 status.go:371] ha-043366-m02 host status = "Stopped" (err=<nil>)
	I1029 08:45:12.851701   86335 status.go:384] host is not running, skipping remaining checks
	I1029 08:45:12.851708   86335 status.go:176] ha-043366-m02 status: &{Name:ha-043366-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:45:12.851734   86335 status.go:174] checking status of ha-043366-m04 ...
	I1029 08:45:12.851986   86335 cli_runner.go:164] Run: docker container inspect ha-043366-m04 --format={{.State.Status}}
	I1029 08:45:12.870697   86335 status.go:371] ha-043366-m04 host status = "Stopped" (err=<nil>)
	I1029 08:45:12.870742   86335 status.go:384] host is not running, skipping remaining checks
	I1029 08:45:12.870752   86335 status.go:176] ha-043366-m04 status: &{Name:ha-043366-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.84s)

TestMultiControlPlane/serial/RestartCluster (56.46s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1029 08:45:56.756528    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.64571622s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.46s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

TestMultiControlPlane/serial/AddSecondaryNode (61.28s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-043366 node add --control-plane --alsologtostderr -v 5: (1m0.370548667s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-043366 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (61.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (40.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-478107 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1029 08:47:38.041151    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-478107 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.945452433s)
--- PASS: TestJSONOutput/start/Command (40.95s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.21s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-478107 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-478107 --output=json --user=testUser: (6.209511964s)
--- PASS: TestJSONOutput/stop/Command (6.21s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-207243 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-207243 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (82.479516ms)

-- stdout --
	{"specversion":"1.0","id":"148d75f4-ce63-4893-92c7-25bc6b882716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-207243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"868bb056-2230-4399-bfbf-3bcf1ecb2bb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21800"}}
	{"specversion":"1.0","id":"904431e4-2c81-430f-beef-fbe96bfbfd78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"434c3292-24c7-4c15-964c-6e6dac6bc006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig"}}
	{"specversion":"1.0","id":"7bf6164b-66e3-458a-9318-18e1cb6df9f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube"}}
	{"specversion":"1.0","id":"23a0c040-0cd4-4ab6-9569-48ebe7d4cb91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a2b581c0-8c9a-47d0-9158-dce2db5285ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"72c51147-6b98-4239-b320-a004d818cbf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-207243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-207243
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (29.53s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-069398 --network=
E1029 08:48:40.598200    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-069398 --network=: (27.335313007s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-069398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-069398
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-069398: (2.177278386s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.53s)

TestKicCustomNetwork/use_default_bridge_network (24.49s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-662227 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-662227 --network=bridge: (22.443258967s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-662227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-662227
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-662227: (2.025923426s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.49s)

TestKicExistingNetwork (26.82s)

=== RUN   TestKicExistingNetwork
I1029 08:49:10.560203    7218 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1029 08:49:10.578534    7218 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1029 08:49:10.578615    7218 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1029 08:49:10.578637    7218 cli_runner.go:164] Run: docker network inspect existing-network
W1029 08:49:10.596547    7218 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1029 08:49:10.596577    7218 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1029 08:49:10.596597    7218 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1029 08:49:10.596750    7218 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1029 08:49:10.615250    7218 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b27c046ec42 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:bc:cb:4a:50:f2} reservation:<nil>}
I1029 08:49:10.615707    7218 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b98dd0}
I1029 08:49:10.615735    7218 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1029 08:49:10.615775    7218 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1029 08:49:10.674558    7218 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-583058 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-583058 --network=existing-network: (24.635389306s)
helpers_test.go:175: Cleaning up "existing-network-583058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-583058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-583058: (2.036776827s)
I1029 08:49:37.365582    7218 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.82s)

TestKicCustomSubnet (24.97s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-232665 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-232665 --subnet=192.168.60.0/24: (22.751704019s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-232665 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-232665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-232665
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-232665: (2.196735941s)
--- PASS: TestKicCustomSubnet (24.97s)

TestKicStaticIP (28.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-434232 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-434232 --static-ip=192.168.200.200: (26.397875821s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-434232 ip
helpers_test.go:175: Cleaning up "static-ip-434232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-434232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-434232: (2.21477724s)
--- PASS: TestKicStaticIP (28.76s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (54.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-767715 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-767715 --driver=docker  --container-runtime=crio: (25.48571619s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-770103 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-770103 --driver=docker  --container-runtime=crio: (23.253673437s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-767715
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-770103
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-770103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-770103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-770103: (2.456826579s)
helpers_test.go:175: Cleaning up "first-767715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-767715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-767715: (2.435120162s)
--- PASS: TestMinikubeProfile (54.93s)

TestMountStart/serial/StartWithMountFirst (9s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-886894 --memory=3072 --mount-string /tmp/TestMountStartserial2325408224/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-886894 --memory=3072 --mount-string /tmp/TestMountStartserial2325408224/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.002936096s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.00s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-886894 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (5.35s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-901224 --memory=3072 --mount-string /tmp/TestMountStartserial2325408224/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-901224 --memory=3072 --mount-string /tmp/TestMountStartserial2325408224/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.351556161s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.35s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-901224 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.76s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-886894 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-886894 --alsologtostderr -v=5: (1.75586334s)
--- PASS: TestMountStart/serial/DeleteFirst (1.76s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-901224 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-901224
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-901224: (1.268179259s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-901224
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-901224: (6.351590907s)
--- PASS: TestMountStart/serial/RestartStopped (7.35s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-901224 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (93.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-850909 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1029 08:52:38.041230    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:53:12.893293    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-850909 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.863383533s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.36s)

TestMultiNode/serial/DeployApp2Nodes (3.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-850909 -- rollout status deployment/busybox: (1.996626826s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-cwrbb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-kjstb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-cwrbb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-kjstb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-cwrbb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-kjstb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.50s)

TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-cwrbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-cwrbb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-kjstb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-850909 -- exec busybox-7b57f96db7-kjstb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

TestMultiNode/serial/AddNode (56.73s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-850909 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-850909 -v=5 --alsologtostderr: (56.069792165s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.73s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-850909 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.12s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp testdata/cp-test.txt multinode-850909:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4117068677/001/cp-test_multinode-850909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909:/home/docker/cp-test.txt multinode-850909-m02:/home/docker/cp-test_multinode-850909_multinode-850909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m02 "sudo cat /home/docker/cp-test_multinode-850909_multinode-850909-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909:/home/docker/cp-test.txt multinode-850909-m03:/home/docker/cp-test_multinode-850909_multinode-850909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m03 "sudo cat /home/docker/cp-test_multinode-850909_multinode-850909-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp testdata/cp-test.txt multinode-850909-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4117068677/001/cp-test_multinode-850909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909-m02:/home/docker/cp-test.txt multinode-850909:/home/docker/cp-test_multinode-850909-m02_multinode-850909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909 "sudo cat /home/docker/cp-test_multinode-850909-m02_multinode-850909.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909-m02:/home/docker/cp-test.txt multinode-850909-m03:/home/docker/cp-test_multinode-850909-m02_multinode-850909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m03 "sudo cat /home/docker/cp-test_multinode-850909-m02_multinode-850909-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp testdata/cp-test.txt multinode-850909-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4117068677/001/cp-test_multinode-850909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909-m03:/home/docker/cp-test.txt multinode-850909:/home/docker/cp-test_multinode-850909-m03_multinode-850909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909 "sudo cat /home/docker/cp-test_multinode-850909-m03_multinode-850909.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 cp multinode-850909-m03:/home/docker/cp-test.txt multinode-850909-m02:/home/docker/cp-test_multinode-850909-m03_multinode-850909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 ssh -n multinode-850909-m02 "sudo cat /home/docker/cp-test_multinode-850909-m03_multinode-850909-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.12s)

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-850909 node stop m03: (1.28293205s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-850909 status: exit status 7 (512.874009ms)

-- stdout --
	multinode-850909
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-850909-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-850909-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr: exit status 7 (517.430663ms)

-- stdout --
	multinode-850909
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-850909-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-850909-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1029 08:54:40.956088  146187 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:54:40.956361  146187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:54:40.956371  146187 out.go:374] Setting ErrFile to fd 2...
	I1029 08:54:40.956376  146187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:54:40.956598  146187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:54:40.956783  146187 out.go:368] Setting JSON to false
	I1029 08:54:40.956805  146187 mustload.go:66] Loading cluster: multinode-850909
	I1029 08:54:40.956940  146187 notify.go:221] Checking for updates...
	I1029 08:54:40.957222  146187 config.go:182] Loaded profile config "multinode-850909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:54:40.957237  146187 status.go:174] checking status of multinode-850909 ...
	I1029 08:54:40.957697  146187 cli_runner.go:164] Run: docker container inspect multinode-850909 --format={{.State.Status}}
	I1029 08:54:40.976393  146187 status.go:371] multinode-850909 host status = "Running" (err=<nil>)
	I1029 08:54:40.976418  146187 host.go:66] Checking if "multinode-850909" exists ...
	I1029 08:54:40.976714  146187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-850909
	I1029 08:54:40.995063  146187 host.go:66] Checking if "multinode-850909" exists ...
	I1029 08:54:40.995379  146187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:54:40.995427  146187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-850909
	I1029 08:54:41.014426  146187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/multinode-850909/id_rsa Username:docker}
	I1029 08:54:41.113611  146187 ssh_runner.go:195] Run: systemctl --version
	I1029 08:54:41.120320  146187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:54:41.133254  146187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 08:54:41.192962  146187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-29 08:54:41.181925479 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 08:54:41.193535  146187 kubeconfig.go:125] found "multinode-850909" server: "https://192.168.67.2:8443"
	I1029 08:54:41.193566  146187 api_server.go:166] Checking apiserver status ...
	I1029 08:54:41.193600  146187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:54:41.206461  146187 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup
	W1029 08:54:41.215764  146187 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:54:41.215827  146187 ssh_runner.go:195] Run: ls
	I1029 08:54:41.220236  146187 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1029 08:54:41.224691  146187 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1029 08:54:41.224718  146187 status.go:463] multinode-850909 apiserver status = Running (err=<nil>)
	I1029 08:54:41.224729  146187 status.go:176] multinode-850909 status: &{Name:multinode-850909 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:54:41.224744  146187 status.go:174] checking status of multinode-850909-m02 ...
	I1029 08:54:41.225096  146187 cli_runner.go:164] Run: docker container inspect multinode-850909-m02 --format={{.State.Status}}
	I1029 08:54:41.243638  146187 status.go:371] multinode-850909-m02 host status = "Running" (err=<nil>)
	I1029 08:54:41.243670  146187 host.go:66] Checking if "multinode-850909-m02" exists ...
	I1029 08:54:41.243948  146187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-850909-m02
	I1029 08:54:41.262100  146187 host.go:66] Checking if "multinode-850909-m02" exists ...
	I1029 08:54:41.262348  146187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:54:41.262380  146187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-850909-m02
	I1029 08:54:41.281098  146187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21800-3727/.minikube/machines/multinode-850909-m02/id_rsa Username:docker}
	I1029 08:54:41.380540  146187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:54:41.393231  146187 status.go:176] multinode-850909-m02 status: &{Name:multinode-850909-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:54:41.393288  146187 status.go:174] checking status of multinode-850909-m03 ...
	I1029 08:54:41.393545  146187 cli_runner.go:164] Run: docker container inspect multinode-850909-m03 --format={{.State.Status}}
	I1029 08:54:41.411896  146187 status.go:371] multinode-850909-m03 host status = "Stopped" (err=<nil>)
	I1029 08:54:41.411925  146187 status.go:384] host is not running, skipping remaining checks
	I1029 08:54:41.411933  146187 status.go:176] multinode-850909-m03 status: &{Name:multinode-850909-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
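The status checks above resolve each node's host state with "docker container inspect --format={{.State.Status}}" before deciding whether to probe kubelet and the apiserver. A minimal Go sketch of that first probe, assuming only a docker CLI on PATH (containerState is an illustrative helper, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the same inspect call the status log shows and
// returns the raw container state string, e.g. "running" or "exited".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	nodes := []string{"multinode-850909", "multinode-850909-m02", "multinode-850909-m03"}
	for _, node := range nodes {
		state, err := containerState(node)
		if err != nil {
			fmt.Printf("%s: %v\n", node, err)
			continue
		}
		fmt.Printf("%s: %s\n", node, state)
	}
}

A stopped container short-circuits the remaining checks, which is why m03 above reports every component as Stopped without an SSH session ("host is not running, skipping remaining checks").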

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-850909 node start m03 -v=5 --alsologtostderr: (6.627247398s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.35s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-850909
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-850909
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-850909: (29.646153962s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-850909 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-850909 --wait=true -v=5 --alsologtostderr: (44.206852613s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-850909
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.98s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-850909 node delete m03: (4.730212027s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)
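The go-template passed to kubectl above prints one line per node carrying the status of its Ready condition. The same template can be exercised locally with Go's text/template; the node list below is stand-in data shaped like the NodeList JSON kubectl would feed it:

package main

import (
	"os"
	"text/template"
)

// The template from the test: walk .items[].status.conditions[] and
// print the status of each node's Ready condition.
const nodeReady = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Stand-in for the NodeList JSON kubectl would return.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	tmpl := template.Must(template.New("ready").Parse(nodeReady))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per node; the test only needs the command to succeed.
}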

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-850909 stop: (28.455880042s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-850909 status: exit status 7 (101.66702ms)

                                                
                                                
-- stdout --
	multinode-850909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-850909-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr: exit status 7 (99.670419ms)

                                                
                                                
-- stdout --
	multinode-850909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-850909-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:56:36.720690  155840 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:56:36.721145  155840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:56:36.721154  155840 out.go:374] Setting ErrFile to fd 2...
	I1029 08:56:36.721159  155840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:56:36.721348  155840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 08:56:36.721527  155840 out.go:368] Setting JSON to false
	I1029 08:56:36.721549  155840 mustload.go:66] Loading cluster: multinode-850909
	I1029 08:56:36.721698  155840 notify.go:221] Checking for updates...
	I1029 08:56:36.721904  155840 config.go:182] Loaded profile config "multinode-850909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:56:36.721917  155840 status.go:174] checking status of multinode-850909 ...
	I1029 08:56:36.722342  155840 cli_runner.go:164] Run: docker container inspect multinode-850909 --format={{.State.Status}}
	I1029 08:56:36.741253  155840 status.go:371] multinode-850909 host status = "Stopped" (err=<nil>)
	I1029 08:56:36.741298  155840 status.go:384] host is not running, skipping remaining checks
	I1029 08:56:36.741309  155840 status.go:176] multinode-850909 status: &{Name:multinode-850909 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:56:36.741357  155840 status.go:174] checking status of multinode-850909-m02 ...
	I1029 08:56:36.741639  155840 cli_runner.go:164] Run: docker container inspect multinode-850909-m02 --format={{.State.Status}}
	I1029 08:56:36.760048  155840 status.go:371] multinode-850909-m02 host status = "Stopped" (err=<nil>)
	I1029 08:56:36.760071  155840 status.go:384] host is not running, skipping remaining checks
	I1029 08:56:36.760077  155840 status.go:176] multinode-850909-m02 status: &{Name:multinode-850909-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.66s)
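Both status calls above exit 7 once every node is stopped, and the harness records that as an expected non-zero exit rather than a failure. A sketch of reading that code with os/exec, assuming nothing beyond what the log shows (a fully stopped cluster yields exit status 7):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-850909", "status")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log above shows a fully stopped cluster exiting with status 7,
		// so a non-zero code here is not automatically a test failure.
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run minikube: %v\n", err)
	}
}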

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-850909 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-850909 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.177361858s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-850909 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.78s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-850909
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-850909-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-850909-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.366805ms)

                                                
                                                
-- stdout --
	* [multinode-850909-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-850909-m02' is duplicated with machine name 'multinode-850909-m02' in profile 'multinode-850909'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-850909-m03 --driver=docker  --container-runtime=crio
E1029 08:57:38.041155    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-850909-m03 --driver=docker  --container-runtime=crio: (21.465836264s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-850909
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-850909: exit status 80 (298.772461ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-850909 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-850909-m03 already exists in multinode-850909-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-850909-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-850909-m03: (2.427881716s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.33s)
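The duplicate-name rejection above follows from how the logs name machines: the profile name for the control plane, then -m02, -m03, and so on for added nodes, so a new profile called multinode-850909-m02 collides with an existing machine. A sketch of that collision check, with the naming rule inferred from the log rather than taken from minikube's code:

package main

import "fmt"

// machineNames derives the per-node machine names visible in the logs:
// the profile name itself for the control plane, plus -m02, -m03, ...
// for workers. This mirrors the output above, not minikube internals.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

func conflicts(candidate, existingProfile string, nodes int) bool {
	for _, name := range machineNames(existingProfile, nodes) {
		if name == candidate {
			return true
		}
	}
	return false
}

func main() {
	// "multinode-850909-m02" collides with the second machine of profile
	// "multinode-850909", which is exactly the MK_USAGE error above.
	fmt.Println(conflicts("multinode-850909-m02", "multinode-850909", 3)) // true
	fmt.Println(conflicts("multinode-850909-m03", "multinode-850909", 2)) // false
}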

                                                
                                    
TestPreload (109.81s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-421842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1029 08:58:12.892898    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-421842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (46.578788986s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-421842 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-421842
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-421842: (5.882260164s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-421842 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1029 08:59:35.961232    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/functional-985165/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-421842 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.708820614s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-421842 image list
helpers_test.go:175: Cleaning up "test-preload-421842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-421842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-421842: (2.457547822s)
--- PASS: TestPreload (109.81s)
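The preload check reduces to: pull an image into a cluster started with --preload=false, stop it, restart with preloads enabled, and confirm the image survived. A sketch of that final verification step, assuming "image list" prints one image reference per line:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-421842", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The image pulled before the restart must still be present after it.
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the restart")
	} else {
		fmt.Println("busybox missing: restart lost the image")
	}
}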

                                                
                                    
TestScheduledStopUnix (97.42s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-953634 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-953634 --memory=3072 --driver=docker  --container-runtime=crio: (20.999400702s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-953634 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-953634 -n scheduled-stop-953634
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-953634 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1029 09:00:07.324451    7218 retry.go:31] will retry after 135.543µs: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.325611    7218 retry.go:31] will retry after 122.839µs: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.326759    7218 retry.go:31] will retry after 334.887µs: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.327904    7218 retry.go:31] will retry after 266.214µs: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.329042    7218 retry.go:31] will retry after 546.264µs: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.330169    7218 retry.go:31] will retry after 719.223µs: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.331276    7218 retry.go:31] will retry after 1.253784ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.333504    7218 retry.go:31] will retry after 1.472081ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.335686    7218 retry.go:31] will retry after 3.654299ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.339850    7218 retry.go:31] will retry after 2.70899ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.343081    7218 retry.go:31] will retry after 7.57115ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.351316    7218 retry.go:31] will retry after 9.582856ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.361574    7218 retry.go:31] will retry after 10.488521ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.372858    7218 retry.go:31] will retry after 16.666272ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.390146    7218 retry.go:31] will retry after 40.438644ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
I1029 09:00:07.431459    7218 retry.go:31] will retry after 46.91354ms: open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/scheduled-stop-953634/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-953634 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-953634 -n scheduled-stop-953634
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-953634
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-953634 --schedule 15s
E1029 09:00:41.117358    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-953634
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-953634: exit status 7 (83.953927ms)

                                                
                                                
-- stdout --
	scheduled-stop-953634
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-953634 -n scheduled-stop-953634
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-953634 -n scheduled-stop-953634: exit status 7 (82.894745ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-953634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-953634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-953634: (4.837171308s)
--- PASS: TestScheduledStopUnix (97.42s)
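The run of retry.go lines above is a poll loop whose wait roughly doubles, with jitter, between attempts while it waits for the scheduled-stop pid file to appear. A minimal sketch of the same pattern, assuming nothing about minikube's retry package beyond what the log shows:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping a jittered, roughly
// doubling interval between attempts, like the retry.go lines above.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not appear within %v", path, maxWait)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	err := waitForFile("/tmp/scheduled-stop-demo/pid", 2*time.Second)
	fmt.Println(err)
}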

                                                
                                    
TestInsufficientStorage (9.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-518168 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-518168 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.16172234s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9a6078e0-5c85-406b-b08e-8ae35ef48f25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-518168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cb4d853-fd63-421a-8573-42502f750e98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21800"}}
	{"specversion":"1.0","id":"357cf0e5-008b-4154-b8b0-7ee845509b12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1e9106e7-686a-46eb-a224-6b58bb95db38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig"}}
	{"specversion":"1.0","id":"06870061-fa81-43df-81a9-3d79f01d2fb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube"}}
	{"specversion":"1.0","id":"8afa4ee6-58f2-4ab0-911a-7af61476750d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"729b672e-c757-4929-b153-caf53b7bf927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"58cdc172-70c0-4615-94d3-50e6c61d8030","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3f8279db-7d38-4741-b1b1-3d528f9a82e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c1201466-7274-4ffc-bb46-ca4611ebea49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc709a47-6c5f-4c65-aee4-1e1bbbbe719c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ff1ef16b-f78f-4086-b0bf-373aac48ee22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-518168\" primary control-plane node in \"insufficient-storage-518168\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e50bc7d-62e1-47ae-b61a-89ac871138a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"34cc3672-fb11-4745-97f9-dc9e05395152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ca891d0-c16f-4bbd-9b27-4f90c5553d18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-518168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-518168 --output=json --layout=cluster: exit status 7 (300.175775ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-518168","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-518168","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1029 09:01:30.737371  176244 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-518168" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-518168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-518168 --output=json --layout=cluster: exit status 7 (299.909821ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-518168","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-518168","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1029 09:01:31.036942  176355 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-518168" does not appear in /home/jenkins/minikube-integration/21800-3727/kubeconfig
	E1029 09:01:31.048049  176355 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/insufficient-storage-518168/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-518168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-518168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-518168: (1.969038057s)
--- PASS: TestInsufficientStorage (9.73s)
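With --output=json, each stdout line above is a CloudEvents envelope (specversion 1.0), and the storage failure arrives as an io.k8s.sigs.minikube.error event whose data carries name, exitcode, message, and advice. A sketch that scans such output for error events; the struct covers only the fields visible above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event models just the CloudEvents fields visible in the log above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Two lines trimmed from the output above.
	logLines := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=21800"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

	sc := bufio.NewScanner(strings.NewReader(logLines))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // not a JSON event line
		}
		if strings.HasSuffix(e.Type, ".error") {
			fmt.Printf("error %s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}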

                                                
                                    
TestRunningBinaryUpgrade (70.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.217281168 start -p running-upgrade-507955 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.217281168 start -p running-upgrade-507955 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.968076732s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-507955 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-507955 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.986312968s)
helpers_test.go:175: Cleaning up "running-upgrade-507955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-507955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-507955: (3.457599125s)
--- PASS: TestRunningBinaryUpgrade (70.90s)

                                                
                                    
TestKubernetesUpgrade (304.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.222866775s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-197158
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-197158: (1.292310809s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-197158 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-197158 status --format={{.Host}}: exit status 7 (83.502627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.57884132s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-197158 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (915.751518ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-197158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-197158
	    minikube start -p kubernetes-upgrade-197158 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1971582 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-197158 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-197158 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.999444854s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-197158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-197158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-197158: (2.830120977s)
--- PASS: TestKubernetesUpgrade (304.04s)
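The K8S_DOWNGRADE_UNSUPPORTED exit above comes down to an ordering check between the running cluster version and the requested one. A dependency-free sketch of that comparison for vMAJOR.MINOR.PATCH strings (minikube itself resolves versions with a semver library; this stand-in only illustrates the rule):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.34.1" into its numeric components.
func parse(v string) ([3]int, error) {
	var out [3]int
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) != 3 {
		return out, fmt.Errorf("bad version %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, fmt.Errorf("bad version %q: %w", v, err)
		}
		out[i] = n
	}
	return out, nil
}

// downgrade reports whether moving from current to requested goes backwards.
func downgrade(current, requested string) bool {
	c, err1 := parse(current)
	r, err2 := parse(requested)
	if err1 != nil || err2 != nil {
		return false
	}
	for i := 0; i < 3; i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return false
}

func main() {
	fmt.Println(downgrade("v1.34.1", "v1.28.0")) // true  -> refuse, as above
	fmt.Println(downgrade("v1.28.0", "v1.34.1")) // false -> allowed upgrade
}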

                                                
                                    
TestMissingContainerUpgrade (72.67s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2727210514 start -p missing-upgrade-438344 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2727210514 start -p missing-upgrade-438344 --memory=3072 --driver=docker  --container-runtime=crio: (25.211707712s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-438344
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-438344: (3.578925339s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-438344
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-438344 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-438344 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.853912488s)
helpers_test.go:175: Cleaning up "missing-upgrade-438344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-438344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-438344: (2.453337377s)
--- PASS: TestMissingContainerUpgrade (72.67s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
TestPause/serial/Start (90.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-470577 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-470577 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m30.442483157s)
--- PASS: TestPause/serial/Start (90.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (60.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2467577505 start -p stopped-upgrade-480100 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2467577505 start -p stopped-upgrade-480100 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.568737303s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2467577505 -p stopped-upgrade-480100 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2467577505 -p stopped-upgrade-480100 stop: (1.989525217s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-480100 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-480100 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.05713587s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.62s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-480100
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-480100: (1.095779202s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808010 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-808010 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (88.205881ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-808010] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
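The MK_USAGE exit above is plain mutual-exclusion validation: a pinned --kubernetes-version contradicts --no-kubernetes. A sketch with the standard flag package, using the flag names from the log (exiting with 14 mirrors the observed exit status, not a documented contract):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	// Same rule the MK_USAGE error above enforces: a pinned version
	// makes no sense when Kubernetes is disabled entirely.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the exit status seen in the log
	}
	fmt.Println("flags ok")
}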

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808010 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808010 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.065480412s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-808010 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.53s)

                                                
                                    
TestNetworkPlugins/group/false (4.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-240549 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-240549 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (223.436202ms)

                                                
                                                
-- stdout --
	* [false-240549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:02:43.786592  194087 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:02:43.786922  194087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:02:43.786934  194087 out.go:374] Setting ErrFile to fd 2...
	I1029 09:02:43.786940  194087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:02:43.788085  194087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-3727/.minikube/bin
	I1029 09:02:43.788918  194087 out.go:368] Setting JSON to false
	I1029 09:02:43.790788  194087 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2712,"bootTime":1761725852,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:02:43.790921  194087 start.go:143] virtualization: kvm guest
	I1029 09:02:43.792837  194087 out.go:179] * [false-240549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:02:43.794439  194087 notify.go:221] Checking for updates...
	I1029 09:02:43.794900  194087 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:02:43.796167  194087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:02:43.797859  194087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-3727/kubeconfig
	I1029 09:02:43.799208  194087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-3727/.minikube
	I1029 09:02:43.801938  194087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:02:43.803402  194087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:02:43.805846  194087 config.go:182] Loaded profile config "NoKubernetes-808010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:02:43.806020  194087 config.go:182] Loaded profile config "pause-470577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:02:43.806136  194087 config.go:182] Loaded profile config "running-upgrade-507955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1029 09:02:43.806247  194087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:02:43.838772  194087 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1029 09:02:43.838936  194087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1029 09:02:43.909359  194087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-29 09:02:43.896511852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1029 09:02:43.909485  194087 docker.go:319] overlay module found
	I1029 09:02:43.911049  194087 out.go:179] * Using the docker driver based on user configuration
	I1029 09:02:43.912503  194087 start.go:309] selected driver: docker
	I1029 09:02:43.912523  194087 start.go:930] validating driver "docker" against <nil>
	I1029 09:02:43.912540  194087 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:02:43.914508  194087 out.go:203] 
	W1029 09:02:43.915712  194087 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1029 09:02:43.916893  194087 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-240549 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-240549

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-240549

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-240549

>>> host: /etc/nsswitch.conf:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /etc/hosts:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /etc/resolv.conf:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-240549

>>> host: crictl pods:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: crictl containers:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> k8s: describe netcat deployment:
error: context "false-240549" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-240549" does not exist

>>> k8s: netcat logs:
error: context "false-240549" does not exist

>>> k8s: describe coredns deployment:
error: context "false-240549" does not exist

>>> k8s: describe coredns pods:
error: context "false-240549" does not exist

>>> k8s: coredns logs:
error: context "false-240549" does not exist

>>> k8s: describe api server pod(s):
error: context "false-240549" does not exist

>>> k8s: api server logs:
error: context "false-240549" does not exist

>>> host: /etc/cni:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: ip a s:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: ip r s:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: iptables-save:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: iptables table nat:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> k8s: describe kube-proxy daemon set:
error: context "false-240549" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-240549" does not exist

>>> k8s: kube-proxy logs:
error: context "false-240549" does not exist

>>> host: kubelet daemon status:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: kubelet daemon config:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> k8s: kubelet logs:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-470577
contexts:
- context:
    cluster: pause-470577
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-470577
  name: pause-470577
current-context: ""
kind: Config
users:
- name: pause-470577
  user:
    client-certificate: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.crt
    client-key: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-240549

>>> host: docker daemon status:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: docker daemon config:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /etc/docker/daemon.json:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: docker system info:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: cri-docker daemon status:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: cri-docker daemon config:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: cri-dockerd version:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: containerd daemon status:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: containerd daemon config:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /etc/containerd/config.toml:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: containerd config dump:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: crio daemon status:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: crio daemon config:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: /etc/crio:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

>>> host: crio config:
* Profile "false-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240549"

----------------------- debugLogs end: false-240549 [took: 4.318154922s] --------------------------------
helpers_test.go:175: Cleaning up "false-240549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-240549
--- PASS: TestNetworkPlugins/group/false (4.92s)

TestPause/serial/SecondStartNoReconfiguration (7.52s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-470577 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-470577 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.50830611s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.52s)

TestNoKubernetes/serial/StartWithStopK8s (10.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808010 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808010 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.826119304s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-808010 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-808010 status -o json: exit status 2 (425.51995ms)

-- stdout --
	{"Name":"NoKubernetes-808010","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-808010
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-808010: (2.250281688s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.50s)

TestNoKubernetes/serial/Start (6.91s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808010 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808010 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.905736701s)
--- PASS: TestNoKubernetes/serial/Start (6.91s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-808010 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-808010 "sudo systemctl is-active --quiet service kubelet": exit status 1 (334.417824ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

TestNoKubernetes/serial/ProfileList (2.01s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.268639713s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.01s)

TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-808010
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-808010: (1.36521001s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

TestNoKubernetes/serial/StartNoArgs (7.48s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808010 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808010 --driver=docker  --container-runtime=crio: (7.477077422s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-808010 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-808010 "sudo systemctl is-active --quiet service kubelet": exit status 1 (359.485967ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestNetworkPlugins/group/auto/Start (69.35s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m9.348410788s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.35s)

TestNetworkPlugins/group/kindnet/Start (69.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m9.953489796s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.95s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-240549 "pgrep -a kubelet"
I1029 09:05:00.447819    7218 config.go:182] Loaded profile config "auto-240549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-240549 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pnxql" [4120dea4-c8d1-4820-b2e6-102d63e1aad5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pnxql" [4120dea4-c8d1-4820-b2e6-102d63e1aad5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004033584s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-240549 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

TestNetworkPlugins/group/calico/Start (50.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.831278551s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.83s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-s82ls" [a8d6ddc2-cf26-4d3a-9849-2a53fa4beff2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003848651s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-240549 "pgrep -a kubelet"
I1029 09:05:54.339682    7218 config.go:182] Loaded profile config "kindnet-240549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-240549 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8tjpg" [704dcf48-cc14-41db-9549-656e5fdbac65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8tjpg" [704dcf48-cc14-41db-9549-656e5fdbac65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003912151s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-240549 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-7z5v2" [2ce9fcc7-1fe9-4565-88f3-08f04ea91c13] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-7z5v2" [2ce9fcc7-1fe9-4565-88f3-08f04ea91c13] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004914056s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (53.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.266256896s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.27s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-240549 "pgrep -a kubelet"
I1029 09:06:26.542082    7218 config.go:182] Loaded profile config "calico-240549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-240549 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zjdg2" [69c95275-71d5-4748-92c1-4c1f82c72ab0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zjdg2" [69c95275-71d5-4748-92c1-4c1f82c72ab0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004218065s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.19s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-240549 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (71.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.689046462s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.69s)

TestNetworkPlugins/group/flannel/Start (49.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.024848767s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-240549 "pgrep -a kubelet"
I1029 09:07:18.090812    7218 config.go:182] Loaded profile config "custom-flannel-240549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-240549 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x2t45" [ad39f84a-06c5-46fa-b16d-485479c0a7ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x2t45" [ad39f84a-06c5-46fa-b16d-485479c0a7ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003424754s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-240549 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/Start (67.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-240549 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.719410204s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.72s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tt9p2" [1f306586-e5b1-4506-8d9c-604342da5a66] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004950648s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-240549 "pgrep -a kubelet"
I1029 09:07:54.847357    7218 config.go:182] Loaded profile config "flannel-240549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-240549 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kklgf" [80d4bc9c-3702-4819-8aa5-b8f272deefad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kklgf" [80d4bc9c-3702-4819-8aa5-b8f272deefad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003048923s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-240549 "pgrep -a kubelet"
I1029 09:08:00.467361    7218 config.go:182] Loaded profile config "enable-default-cni-240549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-240549 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2tz7z" [49b36796-5d38-436d-b2fd-ced15e2172c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2tz7z" [49b36796-5d38-436d-b2fd-ced15e2172c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004273251s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-240549 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-240549 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.646549535s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.65s)

TestStartStop/group/no-preload/serial/FirstStart (54.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.982699096s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.98s)

TestStartStop/group/embed-certs/serial/FirstStart (45.75s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.750797467s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.75s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-240549 "pgrep -a kubelet"
I1029 09:08:56.221427    7218 config.go:182] Loaded profile config "bridge-240549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-240549 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k76dp" [e674b1f0-1e28-4669-909f-0a0e3242dd99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k76dp" [e674b1f0-1e28-4669-909f-0a0e3242dd99] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005746145s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-240549 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-240549 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E1029 09:11:08.485443    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-096492 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1fa1733a-b2ef-4af9-af8c-342513147d4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1fa1733a-b2ef-4af9-af8c-342513147d4e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004011882s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-096492 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)
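Every DeployApp step follows the same pattern: apply testdata/busybox.yaml, poll until the pod labelled integration-test=busybox reports Running, then exec a sanity command such as ulimit -n. The loop below is an illustrative reconstruction of that wait, not the harness's actual helper:

    // waitpod.go - illustrative wait-for-pod loop (not the real helper).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx := "old-k8s-version-096492"                // profile name from the run above
        deadline := time.Now().Add(8 * time.Minute)    // same budget as the test
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", ctx,
                "get", "pods", "-l", "integration-test=busybox",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            // A single busybox pod is expected, so one phase string comes back.
            if err == nil && strings.TrimSpace(string(out)) == "Running" {
                fmt.Println("busybox is healthy")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for busybox")
    }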

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.003492747s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.00s)

TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-834228 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [47adfe3b-59ee-4d67-8d34-eb88528af861] Pending
helpers_test.go:352: "busybox" [47adfe3b-59ee-4d67-8d34-eb88528af861] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [47adfe3b-59ee-4d67-8d34-eb88528af861] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004054501s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-834228 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

TestStartStop/group/no-preload/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-043790 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [52579065-c1ba-441f-8953-b5336db20cc0] Pending
helpers_test.go:352: "busybox" [52579065-c1ba-441f-8953-b5336db20cc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [52579065-c1ba-441f-8953-b5336db20cc0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003630066s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-043790 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

TestStartStop/group/old-k8s-version/serial/Stop (16.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-096492 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-096492 --alsologtostderr -v=3: (16.178027584s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.18s)

TestStartStop/group/embed-certs/serial/Stop (18.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-834228 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-834228 --alsologtostderr -v=3: (18.081574333s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.08s)

TestStartStop/group/no-preload/serial/Stop (16.38s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-043790 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-043790 --alsologtostderr -v=3: (16.37558124s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492: exit status 7 (81.183914ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-096492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
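The EnableAddonAfterStop steps rely on minikube's status exit codes: against a stopped host the command prints Stopped and exits 7, which the test treats as acceptable before enabling the dashboard addon. A sketch of that tolerant check, with the profile name and flags taken from this run:

    // statuscheck.go - tolerant status check; exit code 7 marks a stopped host.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-096492",
            "-n", "old-k8s-version-096492").Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("host: %s\n", out)
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            // Matches the "(may be ok)" handling in the log above.
            fmt.Printf("host reports %q; safe to enable addons\n", out)
        default:
            fmt.Println("status failed:", err)
        }
    }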

TestStartStop/group/old-k8s-version/serial/SecondStart (48.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-096492 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.370839438s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096492 -n old-k8s-version-096492
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.72s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228: exit status 7 (95.66975ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-834228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (48.69s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-834228 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.262227986s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-834228 -n embed-certs-834228
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.69s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790: exit status 7 (97.130468ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-043790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (49.65s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1029 09:10:00.648790    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:00.655174    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:00.667265    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:00.688662    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:00.730348    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:00.812111    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:00.974277    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:01.296183    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:01.937718    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:03.219463    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:05.781222    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-043790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.279737301s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043790 -n no-preload-043790
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.65s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-017274 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5a6e73ef-c304-441e-9c28-76a4f3babb6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5a6e73ef-c304-441e-9c28-76a4f3babb6e] Running
E1029 09:10:10.904646    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.005240727s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-017274 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.34s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (17.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-017274 --alsologtostderr -v=3
E1029 09:10:21.146075    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-017274 --alsologtostderr -v=3: (17.02849078s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274: exit status 7 (82.213681ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-017274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-017274 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.846080463s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-017274 -n default-k8s-diff-port-017274
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.19s)
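Taken together, Stop, EnableAddonAfterStop and SecondStart exercise a full lifecycle: the profile is stopped, gets the dashboard addon enabled while down, and is then restarted with the same flags and re-verified. A compressed sketch of that cycle, with the profile and flags from this run and error handling trimmed:

    // lifecycle.go - compressed stop/enable/restart cycle from the steps above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        return exec.Command("out/minikube-linux-amd64", args...).Run()
    }

    func main() {
        p := "default-k8s-diff-port-017274"
        steps := [][]string{
            {"stop", "-p", p, "--alsologtostderr", "-v=3"},
            {"addons", "enable", "dashboard", "-p", p,
                "--images=MetricsScraper=registry.k8s.io/echoserver:1.4"},
            {"start", "-p", p, "--memory=3072", "--wait=true",
                "--apiserver-port=8444", "--driver=docker", "--container-runtime=crio"},
        }
        for _, s := range steps {
            if err := run(s...); err != nil {
                fmt.Println("step failed:", s[0], err)
                return
            }
        }
        fmt.Println("lifecycle complete")
    }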

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-zt5m2" [f46f157e-bc03-44ee-8351-6e8f3b4da48e] Running
E1029 09:10:41.628373    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005573194s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-zt5m2" [f46f157e-bc03-44ee-8351-6e8f3b4da48e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005134681s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-096492 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c42hl" [f4b1270f-98af-4824-964a-6a694dbaa678] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004040458s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ntb8h" [899c5d21-61f7-485e-9ee7-21097c5687fe] Running
E1029 09:10:47.989484    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:47.995987    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:48.007436    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:48.028928    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:48.070532    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:48.151878    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:48.313951    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:10:48.635601    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003991016s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-096492 image list --format=json
E1029 09:10:49.277811    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
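VerifyKubernetesImages lists the images loaded into the profile and reports any that are not stock Kubernetes images, which is why the kindnetd and busybox tags surface above. A sketch of that filter; the repoTags field is an assumption about the --format=json output, not a documented schema:

    // imagefilter.go - sketch of the non-minikube image report.
    // The repoTags field below is an assumed JSON shape, not documented.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type listedImage struct {
        RepoTags []string `json:"repoTags"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "old-k8s-version-096492", "image", "list", "--format=json").Output()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        var images []listedImage
        if err := json.Unmarshal(out, &images); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, img := range images {
            for _, tag := range img.RepoTags {
                // Anything outside registry.k8s.io counts as non-minikube here,
                // matching the kindest/* and gcr.io/k8s-minikube/* hits above.
                if !strings.HasPrefix(tag, "registry.k8s.io/") {
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }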

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c42hl" [f4b1270f-98af-4824-964a-6a694dbaa678] Running
E1029 09:10:50.560115    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00380736s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-834228 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ntb8h" [899c5d21-61f7-485e-9ee7-21097c5687fe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003756821s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-043790 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-834228 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-043790 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/FirstStart (27.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.082057363s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.08s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4kfgv" [1a5c8fb6-1c63-42d2-8b52-de30e9a56c2c] Running
E1029 09:11:20.202550    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:20.209102    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:20.220608    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:20.242132    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:20.283530    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:20.365128    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:20.527202    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:20.848906    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:21.491202    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:22.590128    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/auto-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:11:22.772628    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00564736s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4kfgv" [1a5c8fb6-1c63-42d2-8b52-de30e9a56c2c] Running
E1029 09:11:25.334639    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/calico-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004020802s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-017274 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (7.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-259430 --alsologtostderr -v=3
E1029 09:11:28.966765    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/kindnet-240549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-259430 --alsologtostderr -v=3: (7.954953449s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.96s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-017274 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430: exit status 7 (84.76562ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-259430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (10.47s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-259430 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.124789114s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-259430 -n newest-cni-259430
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.47s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-259430 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

Test skip (27/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (5.81s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1029 09:02:38.040463    7218 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/addons-306574/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: kubenet-240549 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-240549

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-240549

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /etc/hosts:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /etc/resolv.conf:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-240549

>>> host: crictl pods:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: crictl containers:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> k8s: describe netcat deployment:
error: context "kubenet-240549" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-240549" does not exist

>>> k8s: netcat logs:
error: context "kubenet-240549" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-240549" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-240549" does not exist

>>> k8s: coredns logs:
error: context "kubenet-240549" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-240549" does not exist

>>> k8s: api server logs:
error: context "kubenet-240549" does not exist

>>> host: /etc/cni:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: ip a s:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: ip r s:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: iptables-save:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: iptables table nat:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-240549" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-240549" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-240549" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: kubelet daemon config:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> k8s: kubelet logs:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-470577
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-507955
contexts:
- context:
    cluster: pause-470577
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-470577
  name: pause-470577
- context:
    cluster: running-upgrade-507955
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-507955
  name: running-upgrade-507955
current-context: running-upgrade-507955
kind: Config
users:
- name: pause-470577
  user:
    client-certificate: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.crt
    client-key: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.key
- name: running-upgrade-507955
  user:
    client-certificate: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/running-upgrade-507955/client.crt
    client-key: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/running-upgrade-507955/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-240549

>>> host: docker daemon status:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: docker daemon config:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: docker system info:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: cri-docker daemon status:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: cri-docker daemon config:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: cri-dockerd version:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: containerd daemon status:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: containerd daemon config:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: containerd config dump:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: crio daemon status:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: crio daemon config:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: /etc/crio:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

>>> host: crio config:
* Profile "kubenet-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240549"

----------------------- debugLogs end: kubenet-240549 [took: 5.325786156s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-240549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-240549
--- SKIP: TestNetworkPlugins/group/kubenet (5.81s)
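Every probe in the debugLogs dump above fails the same way because the kubenet profile is skipped before minikube start ever runs: no kubeconfig context and no profile directory exist, so kubectl and minikube are invoked against a name that was never created. A sketch of that failure shape, assuming the probes shell out roughly like this (illustrative, not the harness's actual code):

    package integration

    import (
    	"os/exec"
    	"testing"
    )

    // probeContext is an illustrative stand-in for one debugLogs k8s probe:
    // kubectl runs with --context set to the never-created profile name, so
    // it fails with the context-not-found errors captured above.
    func probeContext(t *testing.T, profile string) {
    	out, err := exec.Command("kubectl", "--context", profile, "get", "nodes").CombinedOutput()
    	if err != nil {
    		t.Logf("probe against %q failed as expected: %v\n%s", profile, err, out)
    	}
    }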

TestNetworkPlugins/group/cilium (4.6s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-240549 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-240549

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-240549

>>> host: /etc/nsswitch.conf:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /etc/hosts:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /etc/resolv.conf:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-240549

>>> host: crictl pods:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: crictl containers:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> k8s: describe netcat deployment:
error: context "cilium-240549" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-240549" does not exist

>>> k8s: netcat logs:
error: context "cilium-240549" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-240549" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-240549" does not exist

>>> k8s: coredns logs:
error: context "cilium-240549" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-240549" does not exist

>>> k8s: api server logs:
error: context "cilium-240549" does not exist

>>> host: /etc/cni:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: ip a s:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: ip r s:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: iptables-save:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: iptables table nat:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-240549

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-240549

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-240549" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-240549" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-240549

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-240549

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-240549" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-240549" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-240549" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-240549" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-240549" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: kubelet daemon config:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> k8s: kubelet logs:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21800-3727/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-470577
contexts:
- context:
    cluster: pause-470577
    extensions:
    - extension:
        last-update: Wed, 29 Oct 2025 09:02:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-470577
  name: pause-470577
current-context: ""
kind: Config
users:
- name: pause-470577
  user:
    client-certificate: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.crt
    client-key: /home/jenkins/minikube-integration/21800-3727/.minikube/profiles/pause-470577/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-240549

>>> host: docker daemon status:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: docker daemon config:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: docker system info:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: cri-docker daemon status:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: cri-docker daemon config:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: cri-dockerd version:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: containerd daemon status:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: containerd daemon config:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: containerd config dump:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: crio daemon status:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: crio daemon config:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: /etc/crio:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

>>> host: crio config:
* Profile "cilium-240549" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240549"

----------------------- debugLogs end: cilium-240549 [took: 4.416083712s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-240549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-240549
--- SKIP: TestNetworkPlugins/group/cilium (4.60s)
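The cleanup step above is the standard harness pattern: the profile is deleted by name whether or not it was ever started, so a later run can reuse the name. A hedged sketch of that step; the binary path and arguments match the log, the helper shape is assumed:

    package integration

    import (
    	"os/exec"
    	"testing"
    )

    // cleanupProfile mirrors the logged cleanup run
    // ("out/minikube-linux-amd64 delete -p cilium-240549"): the profile is
    // removed on test exit regardless of how far the test got.
    func cleanupProfile(t *testing.T, binary, profile string) {
    	t.Cleanup(func() {
    		if out, err := exec.Command(binary, "delete", "-p", profile).CombinedOutput(); err != nil {
    			t.Logf("profile cleanup failed: %v\n%s", err, out)
    		}
    	})
    }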

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-318335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-318335
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
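start_stop_delete_test.go:101 gates this subtest on the VM driver rather than the container runtime; on the docker driver used in this job the subtest never runs, presumably because the mount behavior under test is specific to VM drivers. A sketch of that kind of driver gate, with driverName assumed to be supplied by the harness (the real predicate may differ):

    package integration

    import "testing"

    // maybeSkipDriverMounts mirrors the driver gate recorded above: the
    // disable-driver-mounts subtest is only meaningful on the virtualbox
    // driver. driverName is assumed to come from the harness config.
    func maybeSkipDriverMounts(t *testing.T, driverName string) {
    	if driverName != "virtualbox" {
    		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
    	}
    }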
